Test Report: QEMU_macOS 19875

9b6a7d882f95daeab36015d5b0633b1bcea3cc50:2024-10-28:36842

Failed tests (156/258)

Order  Failed test  Duration (seconds)
3 TestDownloadOnly/v1.20.0/json-events 13.93
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.9
27 TestAddons/Setup 10.3
28 TestCertOptions 10.16
29 TestCertExpiration 195.23
30 TestDockerFlags 10.12
31 TestForceSystemdFlag 10.3
32 TestForceSystemdEnv 10.98
38 TestErrorSpam/setup 9.88
47 TestFunctional/serial/StartWithProxy 10.06
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
61 TestFunctional/serial/MinikubeKubectlCmd 0.74
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.2
63 TestFunctional/serial/ExtraConfig 5.27
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.14
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.3
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.31
89 TestFunctional/parallel/NodeLabels 0.07
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.05
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.05
111 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.09
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 78.48
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.33
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 10.17
142 TestMultiControlPlane/serial/DeployApp 120.44
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.12
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 50.31
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.24
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.09
155 TestMultiControlPlane/serial/StopCluster 3.65
156 TestMultiControlPlane/serial/RestartCluster 5.27
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 10.22
165 TestJSONOutput/start/Command 9.84
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.25
197 TestMountStart/serial/StartWithMountFirst 10.26
200 TestMultiNode/serial/FreshStart2Nodes 9.87
201 TestMultiNode/serial/DeployApp2Nodes 69.89
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.15
208 TestMultiNode/serial/StartAfterStop 55.04
209 TestMultiNode/serial/RestartKeepsNodes 7.39
210 TestMultiNode/serial/DeleteNode 0.12
211 TestMultiNode/serial/StopMultiNode 2.23
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.13
217 TestPreload 10.25
219 TestScheduledStopUnix 9.95
220 TestSkaffold 12.29
223 TestRunningBinaryUpgrade 586.47
225 TestKubernetesUpgrade 17.14
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.03
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.98
241 TestStoppedBinaryUpgrade/Upgrade 572.7
243 TestPause/serial/Start 10.05
253 TestNoKubernetes/serial/StartWithK8s 10.02
254 TestNoKubernetes/serial/StartWithStopK8s 5.33
255 TestNoKubernetes/serial/Start 5.32
259 TestNoKubernetes/serial/StartNoArgs 5.34
261 TestNetworkPlugins/group/auto/Start 9.77
262 TestNetworkPlugins/group/kindnet/Start 10.1
263 TestNetworkPlugins/group/calico/Start 9.77
264 TestNetworkPlugins/group/custom-flannel/Start 9.93
265 TestNetworkPlugins/group/false/Start 9.72
266 TestNetworkPlugins/group/enable-default-cni/Start 9.92
267 TestNetworkPlugins/group/flannel/Start 9.84
268 TestNetworkPlugins/group/bridge/Start 9.88
269 TestNetworkPlugins/group/kubenet/Start 9.92
271 TestStartStop/group/old-k8s-version/serial/FirstStart 9.9
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.28
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
281 TestStartStop/group/old-k8s-version/serial/Pause 0.12
283 TestStartStop/group/no-preload/serial/FirstStart 10.16
284 TestStartStop/group/no-preload/serial/DeployApp 0.1
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.28
290 TestStartStop/group/embed-certs/serial/FirstStart 10.03
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
294 TestStartStop/group/no-preload/serial/Pause 0.11
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.88
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
304 TestStartStop/group/embed-certs/serial/SecondStart 5.26
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
310 TestStartStop/group/embed-certs/serial/Pause 0.11
312 TestStartStop/group/newest-cni/serial/FirstStart 10
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
321 TestStartStop/group/newest-cni/serial/SecondStart 5.27
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.12
TestDownloadOnly/v1.20.0/json-events (13.93s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-131000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-131000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.927252666s)

-- stdout --
	{"specversion":"1.0","id":"478754cb-1e9d-407c-b472-02b9c6cb47e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-131000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0b677a3-09ee-494d-ab98-1749663bc12e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19875"}}
	{"specversion":"1.0","id":"4b1599cc-16e7-4197-bab0-041488138ec8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig"}}
	{"specversion":"1.0","id":"4f4c4bda-2413-4582-a61c-a81b141c0a63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4ca0c053-ad0a-4237-a41d-14e9dfb339f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"926525ec-6e24-4bf7-9d3c-ec53fb936b2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube"}}
	{"specversion":"1.0","id":"40f74824-fe3a-49fa-bd59-61b33319a322","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"59de03fc-4122-41e2-8d5b-5913683dca6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"99623a23-4097-4d68-b48e-321f901877d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9d93dcc5-2570-49e5-a70c-773042aab57d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bbbddc80-7b3b-4d7c-a029-dbcdf794b411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-131000\" primary control-plane node in \"download-only-131000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"631eb6ea-8b7f-437c-9b7c-ff0fe0037aa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9fa7e2d2-e5c1-42d0-8a80-fe12d41972f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320] Decompressors:map[bz2:0x1400012de70 gz:0x1400012de78 tar:0x1400012de20 tar.bz2:0x1400012de30 tar.gz:0x1400012de40 tar.xz:0x1400012de50 tar.zst:0x1400012de60 tbz2:0x1400012de30 tgz:0x1400012de40 txz:0x1400012de50 tzst:0x1400012de60 xz:0x1400012de80 zip:0x1400012de90 zst:0x1400012de88] Getters:map[file:0x140004326d0 http:0x14000a47180 https:0x14000a471d0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"96bcc238-80b6-4874-986b-ceea3f5085df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1028 04:54:00.740121    7453 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:54:00.740288    7453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:54:00.740291    7453 out.go:358] Setting ErrFile to fd 2...
	I1028 04:54:00.740294    7453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:54:00.740421    7453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	W1028 04:54:00.740522    7453 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19875-6942/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19875-6942/.minikube/config/config.json: no such file or directory
	I1028 04:54:00.741913    7453 out.go:352] Setting JSON to true
	I1028 04:54:00.759862    7453 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5011,"bootTime":1730111429,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:54:00.759970    7453 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:54:00.764936    7453 out.go:97] [download-only-131000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:54:00.765056    7453 notify.go:220] Checking for updates...
	W1028 04:54:00.765147    7453 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball: no such file or directory
	I1028 04:54:00.768740    7453 out.go:169] MINIKUBE_LOCATION=19875
	I1028 04:54:00.771801    7453 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 04:54:00.775828    7453 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:54:00.778768    7453 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:54:00.781836    7453 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	W1028 04:54:00.787755    7453 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 04:54:00.788030    7453 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:54:00.790706    7453 out.go:97] Using the qemu2 driver based on user configuration
	I1028 04:54:00.790728    7453 start.go:297] selected driver: qemu2
	I1028 04:54:00.790742    7453 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:54:00.790803    7453 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:54:00.793816    7453 out.go:169] Automatically selected the socket_vmnet network
	I1028 04:54:00.799318    7453 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1028 04:54:00.799417    7453 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 04:54:00.799464    7453 cni.go:84] Creating CNI manager for ""
	I1028 04:54:00.799521    7453 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 04:54:00.799576    7453 start.go:340] cluster config:
	{Name:download-only-131000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-131000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:54:00.804459    7453 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:54:00.807909    7453 out.go:97] Downloading VM boot image ...
	I1028 04:54:00.807928    7453 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso
	I1028 04:54:06.476339    7453 out.go:97] Starting "download-only-131000" primary control-plane node in "download-only-131000" cluster
	I1028 04:54:06.476377    7453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 04:54:06.534173    7453 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 04:54:06.534196    7453 cache.go:56] Caching tarball of preloaded images
	I1028 04:54:06.534388    7453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 04:54:06.538523    7453 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 04:54:06.538530    7453 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 04:54:06.619060    7453 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 04:54:13.245145    7453 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 04:54:13.245340    7453 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 04:54:13.940270    7453 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 04:54:13.940488    7453 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/download-only-131000/config.json ...
	I1028 04:54:13.940508    7453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/download-only-131000/config.json: {Name:mk4445da67b8d452a34b26b9974bcb6d4ac2b382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:54:13.940793    7453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 04:54:13.941037    7453 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1028 04:54:14.584862    7453 out.go:193] 
	W1028 04:54:14.588408    7453 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320] Decompressors:map[bz2:0x1400012de70 gz:0x1400012de78 tar:0x1400012de20 tar.bz2:0x1400012de30 tar.gz:0x1400012de40 tar.xz:0x1400012de50 tar.zst:0x1400012de60 tbz2:0x1400012de30 tgz:0x1400012de40 txz:0x1400012de50 tzst:0x1400012de60 xz:0x1400012de80 zip:0x1400012de90 zst:0x1400012de88] Getters:map[file:0x140004326d0 http:0x14000a47180 https:0x14000a471d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1028 04:54:14.588436    7453 out_reason.go:110] 
	W1028 04:54:14.596917    7453 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:54:14.600884    7453 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-131000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.93s)
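
The exit-40 failure is a download problem rather than a VM problem: the getter reports "bad response code: 404" for the kubectl .sha256 checksum file, most likely because Kubernetes did not publish darwin/arm64 kubectl binaries as far back as v1.20.0. A minimal standalone Go probe (a sketch, not part of the minikube test suite; the URL is copied verbatim from the error above) shows what the server returns:

	// checksum_probe.go: standalone sketch, not minikube code. HEAD the
	// checksum URL that the getter above failed on and print the status.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		// This run observed "bad response code: 404" for this file.
		fmt.Println(url, "->", resp.Status)
	}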

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
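
This subtest only checks that the binary the previous subtest should have cached exists on disk, so it fails as a direct consequence of the 404 above. Roughly (a simplified stand-in for the assertion at aaa_download_only_test.go:175, not the actual test code; the path is copied from the failure message):

	// cache_stat.go: simplified stand-in, not the actual minikube test code.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Path copied from the failure message above.
		path := "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			// Mirrors the FAIL above: the file was never downloaded.
			fmt.Println("expected cached kubectl, got:", err)
		}
	}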

TestOffline (9.9s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-647000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-647000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.740989875s)

-- stdout --
	* [offline-docker-647000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-647000" primary control-plane node in "offline-docker-647000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-647000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:05:01.755542    9081 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:05:01.755717    9081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:05:01.755721    9081 out.go:358] Setting ErrFile to fd 2...
	I1028 05:05:01.755724    9081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:05:01.755858    9081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:05:01.756975    9081 out.go:352] Setting JSON to false
	I1028 05:05:01.776052    9081 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5672,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:05:01.776128    9081 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:05:01.781566    9081 out.go:177] * [offline-docker-647000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:05:01.789596    9081 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:05:01.789613    9081 notify.go:220] Checking for updates...
	I1028 05:05:01.797583    9081 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:05:01.800506    9081 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:05:01.803545    9081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:05:01.806573    9081 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:05:01.809559    9081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:05:01.812937    9081 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:05:01.813005    9081 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:05:01.817547    9081 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:05:01.824529    9081 start.go:297] selected driver: qemu2
	I1028 05:05:01.824537    9081 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:05:01.824544    9081 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:05:01.826756    9081 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:05:01.829596    9081 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:05:01.832587    9081 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:05:01.832607    9081 cni.go:84] Creating CNI manager for ""
	I1028 05:05:01.832628    9081 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:05:01.832632    9081 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:05:01.832677    9081 start.go:340] cluster config:
	{Name:offline-docker-647000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:05:01.837364    9081 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:05:01.845588    9081 out.go:177] * Starting "offline-docker-647000" primary control-plane node in "offline-docker-647000" cluster
	I1028 05:05:01.849567    9081 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:05:01.849601    9081 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:05:01.849609    9081 cache.go:56] Caching tarball of preloaded images
	I1028 05:05:01.849699    9081 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:05:01.849705    9081 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:05:01.849770    9081 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/offline-docker-647000/config.json ...
	I1028 05:05:01.849780    9081 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/offline-docker-647000/config.json: {Name:mkb4d4114f585c88968f30ffe9a01ab4123c29b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:05:01.850075    9081 start.go:360] acquireMachinesLock for offline-docker-647000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:05:01.850118    9081 start.go:364] duration metric: took 36.666µs to acquireMachinesLock for "offline-docker-647000"
	I1028 05:05:01.850129    9081 start.go:93] Provisioning new machine with config: &{Name:offline-docker-647000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:05:01.850156    9081 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:05:01.855445    9081 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 05:05:01.871081    9081 start.go:159] libmachine.API.Create for "offline-docker-647000" (driver="qemu2")
	I1028 05:05:01.871149    9081 client.go:168] LocalClient.Create starting
	I1028 05:05:01.871236    9081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:05:01.871272    9081 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:01.871285    9081 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:01.871323    9081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:05:01.871352    9081 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:01.871365    9081 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:01.871747    9081 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:05:02.031706    9081 main.go:141] libmachine: Creating SSH key...
	I1028 05:05:02.075191    9081 main.go:141] libmachine: Creating Disk image...
	I1028 05:05:02.075202    9081 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:05:02.075829    9081 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2
	I1028 05:05:02.086112    9081 main.go:141] libmachine: STDOUT: 
	I1028 05:05:02.086139    9081 main.go:141] libmachine: STDERR: 
	I1028 05:05:02.086203    9081 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2 +20000M
	I1028 05:05:02.096224    9081 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:05:02.096252    9081 main.go:141] libmachine: STDERR: 
	I1028 05:05:02.096272    9081 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2
	I1028 05:05:02.096277    9081 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:05:02.096288    9081 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:05:02.096326    9081 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:66:9d:4d:af:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2
	I1028 05:05:02.098193    9081 main.go:141] libmachine: STDOUT: 
	I1028 05:05:02.098211    9081 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:05:02.098232    9081 client.go:171] duration metric: took 227.082167ms to LocalClient.Create
	I1028 05:05:04.098259    9081 start.go:128] duration metric: took 2.248146542s to createHost
	I1028 05:05:04.098276    9081 start.go:83] releasing machines lock for "offline-docker-647000", held for 2.248203084s
	W1028 05:05:04.098284    9081 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:04.105296    9081 out.go:177] * Deleting "offline-docker-647000" in qemu2 ...
	W1028 05:05:04.115882    9081 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:04.115894    9081 start.go:729] Will try again in 5 seconds ...
	I1028 05:05:09.118026    9081 start.go:360] acquireMachinesLock for offline-docker-647000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:05:09.118686    9081 start.go:364] duration metric: took 557.292µs to acquireMachinesLock for "offline-docker-647000"
	I1028 05:05:09.118828    9081 start.go:93] Provisioning new machine with config: &{Name:offline-docker-647000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:05:09.119106    9081 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:05:09.130840    9081 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 05:05:09.180114    9081 start.go:159] libmachine.API.Create for "offline-docker-647000" (driver="qemu2")
	I1028 05:05:09.180178    9081 client.go:168] LocalClient.Create starting
	I1028 05:05:09.180336    9081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:05:09.180410    9081 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:09.180429    9081 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:09.180515    9081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:05:09.180571    9081 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:09.180584    9081 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:09.181238    9081 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:05:09.352498    9081 main.go:141] libmachine: Creating SSH key...
	I1028 05:05:09.397570    9081 main.go:141] libmachine: Creating Disk image...
	I1028 05:05:09.397581    9081 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:05:09.397792    9081 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2
	I1028 05:05:09.408276    9081 main.go:141] libmachine: STDOUT: 
	I1028 05:05:09.408314    9081 main.go:141] libmachine: STDERR: 
	I1028 05:05:09.408393    9081 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2 +20000M
	I1028 05:05:09.417408    9081 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:05:09.417427    9081 main.go:141] libmachine: STDERR: 
	I1028 05:05:09.417448    9081 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2
	I1028 05:05:09.417453    9081 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:05:09.417463    9081 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:05:09.417522    9081 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:d8:1e:67:a4:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/offline-docker-647000/disk.qcow2
	I1028 05:05:09.419344    9081 main.go:141] libmachine: STDOUT: 
	I1028 05:05:09.419372    9081 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:05:09.419384    9081 client.go:171] duration metric: took 239.206417ms to LocalClient.Create
	I1028 05:05:11.419916    9081 start.go:128] duration metric: took 2.300809167s to createHost
	I1028 05:05:11.419992    9081 start.go:83] releasing machines lock for "offline-docker-647000", held for 2.301331458s
	W1028 05:05:11.420310    9081 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-647000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-647000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:11.434904    9081 out.go:201] 
	W1028 05:05:11.438961    9081 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:05:11.438996    9081 out.go:270] * 
	* 
	W1028 05:05:11.441244    9081 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:05:11.450899    9081 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-647000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-28 05:05:11.463058 -0700 PDT m=+670.901347959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-647000 -n offline-docker-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-647000 -n offline-docker-647000: exit status 7 (74.472708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-647000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-647000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-647000
--- FAIL: TestOffline (9.90s)
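
The root cause here is on the host, not in Kubernetes: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the socket_vmnet daemon refuses connections on /var/run/socket_vmnet. The same error recurs in the TestAddons/Setup log below. A quick standalone check (an illustrative sketch, not minikube code; the socket path is taken from the SocketVMnetPath field in the cluster config above) verifies whether the daemon is accepting connections:

	// vmnet_check.go: illustrative sketch, not minikube code. Dial the unix
	// socket that socket_vmnet_client uses and report whether it is reachable.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// Matches the failures in this run: "Connection refused".
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}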

TestAddons/Setup (10.3s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-578000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-578000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.294776458s)

-- stdout --
	* [addons-578000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-578000" primary control-plane node in "addons-578000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-578000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:54:23.876229    7530 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:54:23.876375    7530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:54:23.876378    7530 out.go:358] Setting ErrFile to fd 2...
	I1028 04:54:23.876381    7530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:54:23.876533    7530 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:54:23.877702    7530 out.go:352] Setting JSON to false
	I1028 04:54:23.895261    7530 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5034,"bootTime":1730111429,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:54:23.895341    7530 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:54:23.900183    7530 out.go:177] * [addons-578000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:54:23.907153    7530 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 04:54:23.907205    7530 notify.go:220] Checking for updates...
	I1028 04:54:23.914139    7530 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 04:54:23.917145    7530 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:54:23.920128    7530 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:54:23.923193    7530 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 04:54:23.926128    7530 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:54:23.929365    7530 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:54:23.933095    7530 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:54:23.940127    7530 start.go:297] selected driver: qemu2
	I1028 04:54:23.940134    7530 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:54:23.940141    7530 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:54:23.942724    7530 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:54:23.946185    7530 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:54:23.947682    7530 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:54:23.947707    7530 cni.go:84] Creating CNI manager for ""
	I1028 04:54:23.947738    7530 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:54:23.947742    7530 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:54:23.947775    7530 start.go:340] cluster config:
	{Name:addons-578000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-578000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:54:23.952522    7530 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:54:23.959170    7530 out.go:177] * Starting "addons-578000" primary control-plane node in "addons-578000" cluster
	I1028 04:54:23.963196    7530 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:54:23.963215    7530 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:54:23.963225    7530 cache.go:56] Caching tarball of preloaded images
	I1028 04:54:23.963307    7530 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:54:23.963315    7530 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:54:23.963521    7530 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/addons-578000/config.json ...
	I1028 04:54:23.963533    7530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/addons-578000/config.json: {Name:mk6da868b727f7e2a30addbc6cedec7139e9b4a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:54:23.963888    7530 start.go:360] acquireMachinesLock for addons-578000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:54:23.963977    7530 start.go:364] duration metric: took 82.875µs to acquireMachinesLock for "addons-578000"
	I1028 04:54:23.963988    7530 start.go:93] Provisioning new machine with config: &{Name:addons-578000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-578000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:54:23.964018    7530 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:54:23.971061    7530 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1028 04:54:23.987625    7530 start.go:159] libmachine.API.Create for "addons-578000" (driver="qemu2")
	I1028 04:54:23.987674    7530 client.go:168] LocalClient.Create starting
	I1028 04:54:23.987810    7530 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 04:54:24.101051    7530 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 04:54:24.301699    7530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:54:24.463478    7530 main.go:141] libmachine: Creating SSH key...
	I1028 04:54:24.634579    7530 main.go:141] libmachine: Creating Disk image...
	I1028 04:54:24.634590    7530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:54:24.634816    7530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2
	I1028 04:54:24.644815    7530 main.go:141] libmachine: STDOUT: 
	I1028 04:54:24.644845    7530 main.go:141] libmachine: STDERR: 
	I1028 04:54:24.644912    7530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2 +20000M
	I1028 04:54:24.653540    7530 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:54:24.653556    7530 main.go:141] libmachine: STDERR: 
	I1028 04:54:24.653570    7530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2
	I1028 04:54:24.653578    7530 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:54:24.653618    7530 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:54:24.653648    7530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:01:24:c6:5a:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2
	I1028 04:54:24.655449    7530 main.go:141] libmachine: STDOUT: 
	I1028 04:54:24.655465    7530 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:54:24.655493    7530 client.go:171] duration metric: took 667.801708ms to LocalClient.Create
	I1028 04:54:26.657678    7530 start.go:128] duration metric: took 2.693627125s to createHost
	I1028 04:54:26.657738    7530 start.go:83] releasing machines lock for "addons-578000", held for 2.693739s
	W1028 04:54:26.657832    7530 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:54:26.673023    7530 out.go:177] * Deleting "addons-578000" in qemu2 ...
	W1028 04:54:26.699717    7530 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:54:26.699745    7530 start.go:729] Will try again in 5 seconds ...
	I1028 04:54:31.701969    7530 start.go:360] acquireMachinesLock for addons-578000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:54:31.702487    7530 start.go:364] duration metric: took 425.666µs to acquireMachinesLock for "addons-578000"
	I1028 04:54:31.702586    7530 start.go:93] Provisioning new machine with config: &{Name:addons-578000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-578000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:54:31.702842    7530 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:54:31.719723    7530 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1028 04:54:31.772621    7530 start.go:159] libmachine.API.Create for "addons-578000" (driver="qemu2")
	I1028 04:54:31.772677    7530 client.go:168] LocalClient.Create starting
	I1028 04:54:31.772830    7530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 04:54:31.772911    7530 main.go:141] libmachine: Decoding PEM data...
	I1028 04:54:31.772935    7530 main.go:141] libmachine: Parsing certificate...
	I1028 04:54:31.773021    7530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 04:54:31.773079    7530 main.go:141] libmachine: Decoding PEM data...
	I1028 04:54:31.773096    7530 main.go:141] libmachine: Parsing certificate...
	I1028 04:54:31.773876    7530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:54:31.943948    7530 main.go:141] libmachine: Creating SSH key...
	I1028 04:54:32.070756    7530 main.go:141] libmachine: Creating Disk image...
	I1028 04:54:32.070762    7530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:54:32.070966    7530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2
	I1028 04:54:32.081048    7530 main.go:141] libmachine: STDOUT: 
	I1028 04:54:32.081065    7530 main.go:141] libmachine: STDERR: 
	I1028 04:54:32.081132    7530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2 +20000M
	I1028 04:54:32.089552    7530 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:54:32.089572    7530 main.go:141] libmachine: STDERR: 
	I1028 04:54:32.089586    7530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2
	I1028 04:54:32.089590    7530 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:54:32.089599    7530 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:54:32.089635    7530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e4:97:d3:d3:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/addons-578000/disk.qcow2
	I1028 04:54:32.091380    7530 main.go:141] libmachine: STDOUT: 
	I1028 04:54:32.091394    7530 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:54:32.091408    7530 client.go:171] duration metric: took 318.722458ms to LocalClient.Create
	I1028 04:54:34.093544    7530 start.go:128] duration metric: took 2.390664042s to createHost
	I1028 04:54:34.093584    7530 start.go:83] releasing machines lock for "addons-578000", held for 2.391063666s
	W1028 04:54:34.093935    7530 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-578000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-578000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:54:34.107400    7530 out.go:201] 
	W1028 04:54:34.111607    7530 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:54:34.111635    7530 out.go:270] * 
	* 
	W1028 04:54:34.114068    7530 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:54:34.123492    7530 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-578000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.30s)
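
The stderr trace above shows the qemu2 driver's createHost sequence: convert the raw disk image to qcow2, grow it by the requested size, then launch qemu-system-aarch64 through socket_vmnet_client. Both qemu-img steps succeed in every trace ("Image resized."); only the final launch fails. Below is a rough Go sketch of the two disk steps, with hypothetical file names standing in for the machine-directory paths in the log; the real code is minikube's qemu2 driver, not this.

package main

// Reproduce the qemu-img steps logged by libmachine above.
// A sketch under assumed paths, not the actual driver implementation.
import (
	"fmt"
	"os/exec"
)

func createDisk(raw, qcow2 string, extraMB int) error {
	// qemu-img convert -f raw -O qcow2 <raw> <qcow2>   (as logged above)
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	// qemu-img resize <qcow2> +<extraMB>M   (as logged above)
	if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical paths; the logs use the profile's machine directory.
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		fmt.Println(err)
	}
}

Note also the retry behavior visible in the trace: after the first "Connection refused" the driver deletes the half-created machine, waits five seconds ("Will try again in 5 seconds ..."), repeats the whole sequence once, and only then exits with GUEST_PROVISION.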

TestCertOptions (10.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-736000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-736000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.878289916s)

-- stdout --
	* [cert-options-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-736000" primary control-plane node in "cert-options-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-736000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-736000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-736000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (86.445875ms)

-- stdout --
	* The control-plane node cert-options-736000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-736000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-736000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-736000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-736000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-736000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (47.392167ms)

-- stdout --
	* The control-plane node cert-options-736000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-736000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-736000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-736000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-736000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-28 05:05:42.764515 -0700 PDT m=+702.203489501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-736000 -n cert-options-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-736000 -n cert-options-736000: exit status 7 (34.88075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-736000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-736000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-736000
--- FAIL: TestCertOptions (10.16s)
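
The SAN assertions at cert_options_test.go:69 fail only as a consequence of the failed start: with the host stopped, the openssl dump over SSH returns nothing, so none of the requested names can be found. For reference, the check being performed is equivalent to parsing the apiserver certificate and inspecting its SAN lists; here is a hedged Go version (hypothetical, assuming a locally readable copy of the cert rather than the SSH+openssl path the test actually uses).

package main

// Inspect an apiserver certificate's SANs, mirroring what
// `openssl x509 -text -noout` is used for in the test above.
import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // assumed local copy of the cert
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // test expects localhost, www.google.com
	fmt.Println("IP SANs:", cert.IPAddresses) // test expects 127.0.0.1, 192.168.15.15
}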

TestCertExpiration (195.23s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.84166325s)

-- stdout --
	* [cert-expiration-512000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-512000" primary control-plane node in "cert-expiration-512000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-512000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-512000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.236707333s)

-- stdout --
	* [cert-expiration-512000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-512000" primary control-plane node in "cert-expiration-512000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-512000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-512000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-512000" primary control-plane node in "cert-expiration-512000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-512000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-28 05:08:42.743528 -0700 PDT m=+882.186436584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-512000 -n cert-expiration-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-512000 -n cert-expiration-512000: exit status 7 (65.880833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-512000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-512000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-512000
--- FAIL: TestCertExpiration (195.23s)
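
TestCertExpiration's 195-second duration is dominated by the test's built-in wait: it starts a cluster with --cert-expiration=3m, waits out the three minutes, then restarts with --cert-expiration=8760h expecting a warning about expired certificates. Since neither start ever produced a running cluster, the warning check at cert_options_test.go:136 had nothing to match. The expiry condition itself reduces to comparing the certificate's NotAfter against the clock; a small sketch, assuming a locally readable certificate file:

package main

// Report whether a certificate has expired: the condition the second
// `minikube start` above is expected to warn about. Assumed cert path.
import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // assumed path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if time.Now().After(cert.NotAfter) {
		fmt.Printf("certificate expired at %s\n", cert.NotAfter)
	} else {
		fmt.Printf("certificate valid until %s\n", cert.NotAfter)
	}
}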

TestDockerFlags (10.12s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-624000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-624000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.861884792s)

-- stdout --
	* [docker-flags-624000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-624000" primary control-plane node in "docker-flags-624000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-624000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:05:22.637321    9268 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:05:22.637471    9268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:05:22.637474    9268 out.go:358] Setting ErrFile to fd 2...
	I1028 05:05:22.637476    9268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:05:22.637607    9268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:05:22.638793    9268 out.go:352] Setting JSON to false
	I1028 05:05:22.656532    9268 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5693,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:05:22.656607    9268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:05:22.663695    9268 out.go:177] * [docker-flags-624000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:05:22.670685    9268 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:05:22.670735    9268 notify.go:220] Checking for updates...
	I1028 05:05:22.678603    9268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:05:22.681639    9268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:05:22.684570    9268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:05:22.687607    9268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:05:22.695624    9268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:05:22.699026    9268 config.go:182] Loaded profile config "force-systemd-flag-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:05:22.699121    9268 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:05:22.699193    9268 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:05:22.703574    9268 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:05:22.710561    9268 start.go:297] selected driver: qemu2
	I1028 05:05:22.710567    9268 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:05:22.710572    9268 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:05:22.713062    9268 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:05:22.716565    9268 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:05:22.719728    9268 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1028 05:05:22.719760    9268 cni.go:84] Creating CNI manager for ""
	I1028 05:05:22.719795    9268 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:05:22.719812    9268 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:05:22.719836    9268 start.go:340] cluster config:
	{Name:docker-flags-624000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-624000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:05:22.724925    9268 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:05:22.733609    9268 out.go:177] * Starting "docker-flags-624000" primary control-plane node in "docker-flags-624000" cluster
	I1028 05:05:22.736555    9268 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:05:22.736574    9268 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:05:22.736585    9268 cache.go:56] Caching tarball of preloaded images
	I1028 05:05:22.736673    9268 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:05:22.736679    9268 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:05:22.736744    9268 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/docker-flags-624000/config.json ...
	I1028 05:05:22.736755    9268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/docker-flags-624000/config.json: {Name:mk209c917b03aeaa1e9685cb1e891e9e1c4f68c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:05:22.737112    9268 start.go:360] acquireMachinesLock for docker-flags-624000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:05:22.737164    9268 start.go:364] duration metric: took 45.333µs to acquireMachinesLock for "docker-flags-624000"
	I1028 05:05:22.737176    9268 start.go:93] Provisioning new machine with config: &{Name:docker-flags-624000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-624000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:05:22.737203    9268 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:05:22.741666    9268 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 05:05:22.760100    9268 start.go:159] libmachine.API.Create for "docker-flags-624000" (driver="qemu2")
	I1028 05:05:22.760135    9268 client.go:168] LocalClient.Create starting
	I1028 05:05:22.760220    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:05:22.760258    9268 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:22.760270    9268 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:22.760309    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:05:22.760344    9268 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:22.760350    9268 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:22.760757    9268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:05:22.918358    9268 main.go:141] libmachine: Creating SSH key...
	I1028 05:05:23.005373    9268 main.go:141] libmachine: Creating Disk image...
	I1028 05:05:23.005379    9268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:05:23.005554    9268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2
	I1028 05:05:23.015381    9268 main.go:141] libmachine: STDOUT: 
	I1028 05:05:23.015402    9268 main.go:141] libmachine: STDERR: 
	I1028 05:05:23.015451    9268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2 +20000M
	I1028 05:05:23.023843    9268 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:05:23.023857    9268 main.go:141] libmachine: STDERR: 
	I1028 05:05:23.023873    9268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2
	I1028 05:05:23.023877    9268 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:05:23.023889    9268 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:05:23.023919    9268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:73:10:95:30:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2
	I1028 05:05:23.025673    9268 main.go:141] libmachine: STDOUT: 
	I1028 05:05:23.025686    9268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:05:23.025705    9268 client.go:171] duration metric: took 265.570625ms to LocalClient.Create
	I1028 05:05:25.027826    9268 start.go:128] duration metric: took 2.290656125s to createHost
	I1028 05:05:25.027880    9268 start.go:83] releasing machines lock for "docker-flags-624000", held for 2.290745541s
	W1028 05:05:25.027963    9268 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:25.052983    9268 out.go:177] * Deleting "docker-flags-624000" in qemu2 ...
	W1028 05:05:25.076075    9268 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:25.076097    9268 start.go:729] Will try again in 5 seconds ...
	I1028 05:05:30.078237    9268 start.go:360] acquireMachinesLock for docker-flags-624000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:05:30.082391    9268 start.go:364] duration metric: took 3.951167ms to acquireMachinesLock for "docker-flags-624000"
	I1028 05:05:30.082542    9268 start.go:93] Provisioning new machine with config: &{Name:docker-flags-624000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-624000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:05:30.082819    9268 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:05:30.096520    9268 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 05:05:30.144650    9268 start.go:159] libmachine.API.Create for "docker-flags-624000" (driver="qemu2")
	I1028 05:05:30.144694    9268 client.go:168] LocalClient.Create starting
	I1028 05:05:30.144824    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:05:30.144906    9268 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:30.144923    9268 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:30.144982    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:05:30.145043    9268 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:30.145053    9268 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:30.145713    9268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:05:30.318639    9268 main.go:141] libmachine: Creating SSH key...
	I1028 05:05:30.387383    9268 main.go:141] libmachine: Creating Disk image...
	I1028 05:05:30.387389    9268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:05:30.387589    9268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2
	I1028 05:05:30.397954    9268 main.go:141] libmachine: STDOUT: 
	I1028 05:05:30.397976    9268 main.go:141] libmachine: STDERR: 
	I1028 05:05:30.398032    9268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2 +20000M
	I1028 05:05:30.406548    9268 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:05:30.406563    9268 main.go:141] libmachine: STDERR: 
	I1028 05:05:30.406583    9268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2
	I1028 05:05:30.406589    9268 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:05:30.406597    9268 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:05:30.406637    9268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:53:9d:c1:61:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/docker-flags-624000/disk.qcow2
	I1028 05:05:30.408480    9268 main.go:141] libmachine: STDOUT: 
	I1028 05:05:30.408495    9268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:05:30.408507    9268 client.go:171] duration metric: took 263.814083ms to LocalClient.Create
	I1028 05:05:32.410639    9268 start.go:128] duration metric: took 2.327836834s to createHost
	I1028 05:05:32.410701    9268 start.go:83] releasing machines lock for "docker-flags-624000", held for 2.328332s
	W1028 05:05:32.411100    9268 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-624000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-624000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:32.424741    9268 out.go:201] 
	W1028 05:05:32.437022    9268 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:05:32.437057    9268 out.go:270] * 
	* 
	W1028 05:05:32.439362    9268 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:05:32.451716    9268 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-624000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
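Every failure in this test, and in the two that follow, reduces to the same root cause: nothing was listening on /var/run/socket_vmnet, so socket_vmnet_client was refused before qemu-system-aarch64 ever started. A minimal standalone probe (a hypothetical diagnostic, not part of the test suite; it assumes the default socket path shown in the log) reproduces the error outside of minikube:

// probe_socket_vmnet.go: check whether the socket_vmnet daemon is accepting
// connections on its unix socket. A "connection refused" here matches the
// STDERR captured above.
package main

import (
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is up")
}

If the dial fails, restarting the socket_vmnet daemon on the CI host is the likely fix; none of the per-test retries below can succeed while it is down.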
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-624000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-624000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (90.3875ms)

-- stdout --
	* The control-plane node docker-flags-624000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-624000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-624000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-624000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-624000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-624000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-624000\"\n"*.
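What the assertions at docker_test.go:63 boil down to: "systemctl show docker --property=Environment" should print a line containing each --docker-env pair once the VM is up. A reduced sketch of a passing check (the ssh transport is omitted, and the Environment line is assumed sample output, since the host never booted in this run):

// env_check.go: substring check mirroring docker_test.go:63.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Assumed output of: systemctl show docker --property=Environment
	out := "Environment=FOO=BAR BAZ=BAT\n"
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(out, kv) {
			fmt.Printf("expected env %q in %q\n", kv, out)
		}
	}
}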
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-624000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-624000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.635709ms)

-- stdout --
	* The control-plane node docker-flags-624000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-624000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-624000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-624000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-624000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-624000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-28 05:05:32.606593 -0700 PDT m=+692.045344542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-624000 -n docker-flags-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-624000 -n docker-flags-624000: exit status 7 (33.962792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-624000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-624000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-624000
--- FAIL: TestDockerFlags (10.12s)
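The two "Creating qemu2 VM" attempts in the log above are a single retry: after the first StartHost failure the half-created profile is deleted and creation is attempted once more after a fixed 5-second delay ("Will try again in 5 seconds ..."). A minimal sketch of that control flow (names are illustrative, not minikube's actual API):

// retry_shape.go: create once, retry once after 5s, as seen in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost() error {
	// Stand-in for the QEMU launch; fails the way this run did.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}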

TestForceSystemdFlag (10.3s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-219000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-219000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.099247958s)

-- stdout --
	* [force-systemd-flag-219000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-219000" primary control-plane node in "force-systemd-flag-219000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-219000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:05:17.392071    9247 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:05:17.392222    9247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:05:17.392226    9247 out.go:358] Setting ErrFile to fd 2...
	I1028 05:05:17.392228    9247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:05:17.392352    9247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:05:17.393516    9247 out.go:352] Setting JSON to false
	I1028 05:05:17.410994    9247 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5688,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:05:17.411064    9247 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:05:17.418450    9247 out.go:177] * [force-systemd-flag-219000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:05:17.434494    9247 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:05:17.434517    9247 notify.go:220] Checking for updates...
	I1028 05:05:17.444421    9247 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:05:17.448469    9247 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:05:17.451369    9247 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:05:17.454466    9247 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:05:17.457442    9247 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:05:17.460756    9247 config.go:182] Loaded profile config "force-systemd-env-564000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:05:17.460857    9247 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:05:17.460906    9247 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:05:17.465425    9247 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:05:17.472369    9247 start.go:297] selected driver: qemu2
	I1028 05:05:17.472375    9247 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:05:17.472381    9247 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:05:17.475038    9247 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:05:17.478385    9247 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:05:17.481494    9247 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 05:05:17.481510    9247 cni.go:84] Creating CNI manager for ""
	I1028 05:05:17.481533    9247 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:05:17.481539    9247 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:05:17.481576    9247 start.go:340] cluster config:
	{Name:force-systemd-flag-219000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-219000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:05:17.486504    9247 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:05:17.494416    9247 out.go:177] * Starting "force-systemd-flag-219000" primary control-plane node in "force-systemd-flag-219000" cluster
	I1028 05:05:17.498292    9247 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:05:17.498311    9247 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:05:17.498326    9247 cache.go:56] Caching tarball of preloaded images
	I1028 05:05:17.498427    9247 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:05:17.498433    9247 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:05:17.498506    9247 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/force-systemd-flag-219000/config.json ...
	I1028 05:05:17.498518    9247 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/force-systemd-flag-219000/config.json: {Name:mk35013a9c570722b6384b33d07430c5d968fddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:05:17.498895    9247 start.go:360] acquireMachinesLock for force-systemd-flag-219000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:05:17.498952    9247 start.go:364] duration metric: took 48.959µs to acquireMachinesLock for "force-systemd-flag-219000"
	I1028 05:05:17.498965    9247 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-219000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-219000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:05:17.498997    9247 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:05:17.507237    9247 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 05:05:17.526164    9247 start.go:159] libmachine.API.Create for "force-systemd-flag-219000" (driver="qemu2")
	I1028 05:05:17.526195    9247 client.go:168] LocalClient.Create starting
	I1028 05:05:17.526279    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:05:17.526322    9247 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:17.526340    9247 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:17.526381    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:05:17.526414    9247 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:17.526423    9247 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:17.526867    9247 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:05:17.684097    9247 main.go:141] libmachine: Creating SSH key...
	I1028 05:05:17.866688    9247 main.go:141] libmachine: Creating Disk image...
	I1028 05:05:17.866700    9247 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:05:17.866906    9247 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2
	I1028 05:05:17.877277    9247 main.go:141] libmachine: STDOUT: 
	I1028 05:05:17.877297    9247 main.go:141] libmachine: STDERR: 
	I1028 05:05:17.877362    9247 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2 +20000M
	I1028 05:05:17.885787    9247 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:05:17.885807    9247 main.go:141] libmachine: STDERR: 
	I1028 05:05:17.885828    9247 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2
	I1028 05:05:17.885835    9247 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:05:17.885847    9247 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:05:17.885882    9247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:16:bc:5f:4e:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2
	I1028 05:05:17.887664    9247 main.go:141] libmachine: STDOUT: 
	I1028 05:05:17.887690    9247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:05:17.887712    9247 client.go:171] duration metric: took 361.51875ms to LocalClient.Create
	I1028 05:05:19.889843    9247 start.go:128] duration metric: took 2.390878625s to createHost
	I1028 05:05:19.889960    9247 start.go:83] releasing machines lock for "force-systemd-flag-219000", held for 2.391003083s
	W1028 05:05:19.890020    9247 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:19.916148    9247 out.go:177] * Deleting "force-systemd-flag-219000" in qemu2 ...
	W1028 05:05:19.938780    9247 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:19.938796    9247 start.go:729] Will try again in 5 seconds ...
	I1028 05:05:24.940845    9247 start.go:360] acquireMachinesLock for force-systemd-flag-219000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:05:25.028019    9247 start.go:364] duration metric: took 87.069042ms to acquireMachinesLock for "force-systemd-flag-219000"
	I1028 05:05:25.028141    9247 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-219000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-219000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:05:25.028352    9247 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:05:25.041986    9247 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 05:05:25.088538    9247 start.go:159] libmachine.API.Create for "force-systemd-flag-219000" (driver="qemu2")
	I1028 05:05:25.088605    9247 client.go:168] LocalClient.Create starting
	I1028 05:05:25.088743    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:05:25.088829    9247 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:25.088846    9247 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:25.088906    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:05:25.088979    9247 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:25.088993    9247 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:25.089627    9247 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:05:25.260580    9247 main.go:141] libmachine: Creating SSH key...
	I1028 05:05:25.388434    9247 main.go:141] libmachine: Creating Disk image...
	I1028 05:05:25.388442    9247 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:05:25.388627    9247 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2
	I1028 05:05:25.398736    9247 main.go:141] libmachine: STDOUT: 
	I1028 05:05:25.398760    9247 main.go:141] libmachine: STDERR: 
	I1028 05:05:25.398828    9247 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2 +20000M
	I1028 05:05:25.407275    9247 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:05:25.407293    9247 main.go:141] libmachine: STDERR: 
	I1028 05:05:25.407305    9247 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2
	I1028 05:05:25.407311    9247 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:05:25.407324    9247 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:05:25.407355    9247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:51:b5:5b:e2:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-flag-219000/disk.qcow2
	I1028 05:05:25.409049    9247 main.go:141] libmachine: STDOUT: 
	I1028 05:05:25.409065    9247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:05:25.409077    9247 client.go:171] duration metric: took 320.473458ms to LocalClient.Create
	I1028 05:05:27.411273    9247 start.go:128] duration metric: took 2.382943459s to createHost
	I1028 05:05:27.411331    9247 start.go:83] releasing machines lock for "force-systemd-flag-219000", held for 2.383341209s
	W1028 05:05:27.411659    9247 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-219000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-219000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:27.427673    9247 out.go:201] 
	W1028 05:05:27.434608    9247 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:05:27.434650    9247 out.go:270] * 
	* 
	W1028 05:05:27.437507    9247 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:05:27.444518    9247 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-219000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-219000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-219000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (85.863375ms)

-- stdout --
	* The control-plane node force-systemd-flag-219000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-219000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-219000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-28 05:05:27.548593 -0700 PDT m=+686.987234251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-219000 -n force-systemd-flag-219000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-219000 -n force-systemd-flag-219000: exit status 7 (36.135125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-219000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-219000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-219000
--- FAIL: TestForceSystemdFlag (10.30s)
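TestForceSystemdFlag never reached its real assertion: that Docker inside the guest reports "systemd" as its cgroup driver. A host-side sketch of the same probe (it assumes a local docker CLI; the actual test runs the command over "minikube ssh"):

// cgroup_driver_check.go: the check docker_test.go:110 would have performed.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", driver)
	}
}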

TestForceSystemdEnv (10.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-564000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1028 05:05:12.132415    7452 install.go:79] stdout: 
W1028 05:05:12.132571    7452 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/001/docker-machine-driver-hyperkit 

I1028 05:05:12.132588    7452 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/001/docker-machine-driver-hyperkit]
I1028 05:05:12.143618    7452 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/001/docker-machine-driver-hyperkit]
I1028 05:05:12.154483    7452 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/001/docker-machine-driver-hyperkit]
I1028 05:05:12.165250    7452 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/001/docker-machine-driver-hyperkit]
I1028 05:05:12.186612    7452 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 05:05:12.186746    7452 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1028 05:05:13.989920    7452 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1028 05:05:13.989946    7452 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1028 05:05:13.990001    7452 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1028 05:05:13.990034    7452 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/002/docker-machine-driver-hyperkit
I1028 05:05:14.384215    7452 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10542a6e0 0x10542a6e0 0x10542a6e0 0x10542a6e0 0x10542a6e0 0x10542a6e0 0x10542a6e0] Decompressors:map[bz2:0x140006a7cc0 gz:0x140006a7cc8 tar:0x140006a7c30 tar.bz2:0x140006a7c50 tar.gz:0x140006a7c60 tar.xz:0x140006a7c90 tar.zst:0x140006a7ca0 tbz2:0x140006a7c50 tgz:0x140006a7c60 txz:0x140006a7c90 tzst:0x140006a7ca0 xz:0x140006a7ce0 zip:0x140006a7cf0 zst:0x140006a7ce8] Getters:map[file:0x14001552b30 http:0x140000493b0 https:0x14000049400] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 05:05:14.384345    7452 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/002/docker-machine-driver-hyperkit
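The 404 above is the driver updater's expected fallback path: no arm64-specific asset exists for the v1.3.0 hyperkit driver, so when the checksum file download fails it retries the common (unsuffixed) asset name. A sketch of that fallback shape (URLs taken from the log; checksum verification and file writing omitted):

// driver_fallback.go: try the arch-specific release asset, then the common one.
package main

import (
	"fmt"
	"net/http"
)

func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	return nil // body discarded in this sketch
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	if err := fetch(base + "-arm64"); err != nil {
		fmt.Println("failed to download arch specific driver:", err, "- trying the common version")
		if err := fetch(base); err != nil {
			fmt.Println("common download also failed:", err)
		}
	}
}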
I1028 05:05:17.308849    7452 install.go:79] stdout: 
W1028 05:05:17.309076    7452 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/002/docker-machine-driver-hyperkit 

I1028 05:05:17.309121    7452 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/002/docker-machine-driver-hyperkit]
I1028 05:05:17.325719    7452 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/002/docker-machine-driver-hyperkit]
I1028 05:05:17.338339    7452 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/002/docker-machine-driver-hyperkit]
I1028 05:05:17.348957    7452 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-564000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.783586208s)

-- stdout --
	* [force-systemd-env-564000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-564000" primary control-plane node in "force-systemd-env-564000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-564000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:05:11.654799    9215 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:05:11.654958    9215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:05:11.654962    9215 out.go:358] Setting ErrFile to fd 2...
	I1028 05:05:11.654964    9215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:05:11.655099    9215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:05:11.656266    9215 out.go:352] Setting JSON to false
	I1028 05:05:11.673902    9215 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5682,"bootTime":1730111429,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:05:11.673975    9215 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:05:11.679688    9215 out.go:177] * [force-systemd-env-564000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:05:11.687667    9215 notify.go:220] Checking for updates...
	I1028 05:05:11.691643    9215 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:05:11.699625    9215 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:05:11.707575    9215 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:05:11.715614    9215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:05:11.723600    9215 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:05:11.731582    9215 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1028 05:05:11.736026    9215 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:05:11.736107    9215 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:05:11.741604    9215 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:05:11.748446    9215 start.go:297] selected driver: qemu2
	I1028 05:05:11.748451    9215 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:05:11.748457    9215 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:05:11.751129    9215 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:05:11.754646    9215 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:05:11.758689    9215 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 05:05:11.758704    9215 cni.go:84] Creating CNI manager for ""
	I1028 05:05:11.758724    9215 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:05:11.758728    9215 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:05:11.758764    9215 start.go:340] cluster config:
	{Name:force-systemd-env-564000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-564000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:05:11.763228    9215 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:05:11.771620    9215 out.go:177] * Starting "force-systemd-env-564000" primary control-plane node in "force-systemd-env-564000" cluster
	I1028 05:05:11.775471    9215 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:05:11.775484    9215 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:05:11.775492    9215 cache.go:56] Caching tarball of preloaded images
	I1028 05:05:11.775568    9215 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:05:11.775574    9215 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:05:11.775629    9215 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/force-systemd-env-564000/config.json ...
	I1028 05:05:11.775640    9215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/force-systemd-env-564000/config.json: {Name:mk2d3684414e79abf3db71847f9db7c61fb89f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:05:11.775863    9215 start.go:360] acquireMachinesLock for force-systemd-env-564000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:05:11.775912    9215 start.go:364] duration metric: took 39.417µs to acquireMachinesLock for "force-systemd-env-564000"
	I1028 05:05:11.775923    9215 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-564000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-564000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:05:11.775961    9215 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:05:11.783637    9215 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 05:05:11.799976    9215 start.go:159] libmachine.API.Create for "force-systemd-env-564000" (driver="qemu2")
	I1028 05:05:11.800001    9215 client.go:168] LocalClient.Create starting
	I1028 05:05:11.800091    9215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:05:11.800127    9215 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:11.800139    9215 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:11.800180    9215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:05:11.800209    9215 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:11.800215    9215 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:11.800597    9215 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:05:11.959010    9215 main.go:141] libmachine: Creating SSH key...
	I1028 05:05:12.100077    9215 main.go:141] libmachine: Creating Disk image...
	I1028 05:05:12.100088    9215 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:05:12.100302    9215 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2
	I1028 05:05:12.110829    9215 main.go:141] libmachine: STDOUT: 
	I1028 05:05:12.110856    9215 main.go:141] libmachine: STDERR: 
	I1028 05:05:12.110948    9215 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2 +20000M
	I1028 05:05:12.120558    9215 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:05:12.120582    9215 main.go:141] libmachine: STDERR: 
	I1028 05:05:12.120597    9215 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2
	I1028 05:05:12.120603    9215 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:05:12.120623    9215 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:05:12.120653    9215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:db:4d:93:9e:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2
	I1028 05:05:12.122728    9215 main.go:141] libmachine: STDOUT: 
	I1028 05:05:12.122744    9215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:05:12.122765    9215 client.go:171] duration metric: took 322.765541ms to LocalClient.Create
	I1028 05:05:14.124834    9215 start.go:128] duration metric: took 2.348907792s to createHost
	I1028 05:05:14.124863    9215 start.go:83] releasing machines lock for "force-systemd-env-564000", held for 2.348997125s
	W1028 05:05:14.124880    9215 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:14.140801    9215 out.go:177] * Deleting "force-systemd-env-564000" in qemu2 ...
	W1028 05:05:14.161518    9215 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:14.161529    9215 start.go:729] Will try again in 5 seconds ...
	I1028 05:05:19.163690    9215 start.go:360] acquireMachinesLock for force-systemd-env-564000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:05:19.890103    9215 start.go:364] duration metric: took 726.312292ms to acquireMachinesLock for "force-systemd-env-564000"
	I1028 05:05:19.890238    9215 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-564000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-564000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:05:19.890549    9215 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:05:19.900041    9215 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1028 05:05:19.947946    9215 start.go:159] libmachine.API.Create for "force-systemd-env-564000" (driver="qemu2")
	I1028 05:05:19.947993    9215 client.go:168] LocalClient.Create starting
	I1028 05:05:19.948127    9215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:05:19.948203    9215 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:19.948220    9215 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:19.948307    9215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:05:19.948371    9215 main.go:141] libmachine: Decoding PEM data...
	I1028 05:05:19.948387    9215 main.go:141] libmachine: Parsing certificate...
	I1028 05:05:19.949032    9215 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:05:20.118920    9215 main.go:141] libmachine: Creating SSH key...
	I1028 05:05:20.329776    9215 main.go:141] libmachine: Creating Disk image...
	I1028 05:05:20.329785    9215 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:05:20.329992    9215 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2
	I1028 05:05:20.340104    9215 main.go:141] libmachine: STDOUT: 
	I1028 05:05:20.340124    9215 main.go:141] libmachine: STDERR: 
	I1028 05:05:20.340203    9215 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2 +20000M
	I1028 05:05:20.348580    9215 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:05:20.348604    9215 main.go:141] libmachine: STDERR: 
	I1028 05:05:20.348623    9215 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2
	I1028 05:05:20.348629    9215 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:05:20.348638    9215 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:05:20.348676    9215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:b0:09:4f:ed:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/force-systemd-env-564000/disk.qcow2
	I1028 05:05:20.350403    9215 main.go:141] libmachine: STDOUT: 
	I1028 05:05:20.350426    9215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:05:20.350441    9215 client.go:171] duration metric: took 402.45125ms to LocalClient.Create
	I1028 05:05:22.352580    9215 start.go:128] duration metric: took 2.462048625s to createHost
	I1028 05:05:22.352641    9215 start.go:83] releasing machines lock for "force-systemd-env-564000", held for 2.462539125s
	W1028 05:05:22.353076    9215 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-564000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-564000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:05:22.369789    9215 out.go:201] 
	W1028 05:05:22.376827    9215 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:05:22.376880    9215 out.go:270] * 
	* 
	W1028 05:05:22.379847    9215 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:05:22.389459    9215 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-564000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-564000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-564000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.530041ms)

-- stdout --
	* The control-plane node force-systemd-env-564000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-564000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-564000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-28 05:05:22.488632 -0700 PDT m=+681.927162959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-564000 -n force-systemd-env-564000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-564000 -n force-systemd-env-564000: exit status 7 (36.293333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-564000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-564000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-564000
--- FAIL: TestForceSystemdEnv (10.98s)
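Note: every failure in this run shares a single root cause. The qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach a socket_vmnet daemon listening on the unix socket /var/run/socket_vmnet, and that connection is refused. Below is a minimal Go sketch of the connectivity probe that is failing (standard library only, not minikube code; the socket path is taken verbatim from the log above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the unix socket that the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On the affected host this prints the same "connection refused" seen
		// throughout this report, meaning no daemon is listening on the socket.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet daemon is accepting connections")
}

If socket_vmnet was installed via Homebrew, restarting its service (typically "sudo brew services start socket_vmnet") is the usual remedy; the exact command depends on how the daemon is managed on the CI host.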

TestErrorSpam/setup (9.88s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-220000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-220000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 --driver=qemu2 : exit status 80 (9.882294792s)

-- stdout --
	* [nospam-220000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-220000" primary control-plane node in "nospam-220000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-220000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-220000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-220000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19875
- KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-220000" primary control-plane node in "nospam-220000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-220000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.88s)
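Note: host creation gets as far as disk preparation before failing. In the TestForceSystemdEnv log above, the qemu-img convert and qemu-img resize steps complete with empty STDERR; only the subsequent socket_vmnet_client launch aborts. A short Go sketch of that disk-prep sequence using os/exec follows (the paths here are hypothetical placeholders; real runs use the machine directory under MINIKUBE_HOME):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Hypothetical placeholder paths; real runs use
	// $MINIKUBE_HOME/machines/<profile>/disk.qcow2.raw and .../disk.qcow2.
	raw := "/tmp/disk.qcow2.raw"
	qcow := "/tmp/disk.qcow2"

	// Mirrors the logged command: qemu-img convert -f raw -O qcow2 <raw> <qcow2>
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow).CombinedOutput(); err != nil {
		log.Fatalf("convert failed: %v\n%s", err, out)
	}
	// Mirrors the logged command: qemu-img resize <qcow2> +20000M
	if out, err := exec.Command("qemu-img", "resize", qcow, "+20000M").CombinedOutput(); err != nil {
		log.Fatalf("resize failed: %v\n%s", err, out)
	}
	log.Println("disk image prepared")
}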

TestFunctional/serial/StartWithProxy (10.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-238000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-238000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.980799958s)

-- stdout --
	* [functional-238000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-238000" primary control-plane node in "functional-238000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-238000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57853 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57853 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57853 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-238000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-238000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-238000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19875
- KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-238000" primary control-plane node in "functional-238000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-238000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:57853 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:57853 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:57853 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-238000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (75.929625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.06s)

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
I1028 04:55:03.626419    7452 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-238000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-238000 --alsologtostderr -v=8: exit status 80 (5.189862625s)

-- stdout --
	* [functional-238000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-238000" primary control-plane node in "functional-238000" cluster
	* Restarting existing qemu2 VM for "functional-238000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-238000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:55:03.660394    7668 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:55:03.660537    7668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:55:03.660541    7668 out.go:358] Setting ErrFile to fd 2...
	I1028 04:55:03.660543    7668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:55:03.660684    7668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:55:03.661771    7668 out.go:352] Setting JSON to false
	I1028 04:55:03.679608    7668 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5074,"bootTime":1730111429,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:55:03.679688    7668 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:55:03.685013    7668 out.go:177] * [functional-238000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:55:03.692920    7668 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 04:55:03.692973    7668 notify.go:220] Checking for updates...
	I1028 04:55:03.698837    7668 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 04:55:03.701916    7668 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:55:03.703330    7668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:55:03.706872    7668 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 04:55:03.709947    7668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:55:03.713217    7668 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:55:03.713294    7668 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:55:03.717818    7668 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:55:03.724934    7668 start.go:297] selected driver: qemu2
	I1028 04:55:03.724940    7668 start.go:901] validating driver "qemu2" against &{Name:functional-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:55:03.725013    7668 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:55:03.727503    7668 cni.go:84] Creating CNI manager for ""
	I1028 04:55:03.727544    7668 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:55:03.727589    7668 start.go:340] cluster config:
	{Name:functional-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:55:03.732223    7668 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:55:03.739905    7668 out.go:177] * Starting "functional-238000" primary control-plane node in "functional-238000" cluster
	I1028 04:55:03.743725    7668 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:55:03.743746    7668 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:55:03.743754    7668 cache.go:56] Caching tarball of preloaded images
	I1028 04:55:03.743826    7668 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:55:03.743832    7668 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:55:03.743885    7668 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/functional-238000/config.json ...
	I1028 04:55:03.744346    7668 start.go:360] acquireMachinesLock for functional-238000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:55:03.744393    7668 start.go:364] duration metric: took 41.708µs to acquireMachinesLock for "functional-238000"
	I1028 04:55:03.744401    7668 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:55:03.744405    7668 fix.go:54] fixHost starting: 
	I1028 04:55:03.744519    7668 fix.go:112] recreateIfNeeded on functional-238000: state=Stopped err=<nil>
	W1028 04:55:03.744526    7668 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:55:03.752820    7668 out.go:177] * Restarting existing qemu2 VM for "functional-238000" ...
	I1028 04:55:03.756863    7668 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:55:03.756896    7668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ad:da:29:b3:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/disk.qcow2
	I1028 04:55:03.759113    7668 main.go:141] libmachine: STDOUT: 
	I1028 04:55:03.759133    7668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:55:03.759161    7668 fix.go:56] duration metric: took 14.754ms for fixHost
	I1028 04:55:03.759166    7668 start.go:83] releasing machines lock for "functional-238000", held for 14.768625ms
	W1028 04:55:03.759172    7668 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:55:03.759211    7668 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:55:03.759215    7668 start.go:729] Will try again in 5 seconds ...
	I1028 04:55:08.761552    7668 start.go:360] acquireMachinesLock for functional-238000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:55:08.762010    7668 start.go:364] duration metric: took 352.916µs to acquireMachinesLock for "functional-238000"
	I1028 04:55:08.762180    7668 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:55:08.762202    7668 fix.go:54] fixHost starting: 
	I1028 04:55:08.762986    7668 fix.go:112] recreateIfNeeded on functional-238000: state=Stopped err=<nil>
	W1028 04:55:08.763010    7668 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:55:08.766597    7668 out.go:177] * Restarting existing qemu2 VM for "functional-238000" ...
	I1028 04:55:08.769389    7668 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:55:08.769627    7668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ad:da:29:b3:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/disk.qcow2
	I1028 04:55:08.780236    7668 main.go:141] libmachine: STDOUT: 
	I1028 04:55:08.780286    7668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:55:08.780398    7668 fix.go:56] duration metric: took 18.199375ms for fixHost
	I1028 04:55:08.780421    7668 start.go:83] releasing machines lock for "functional-238000", held for 18.386042ms
	W1028 04:55:08.780609    7668 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-238000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-238000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:55:08.789274    7668 out.go:201] 
	W1028 04:55:08.793484    7668 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:55:08.793525    7668 out.go:270] * 
	* 
	W1028 04:55:08.796225    7668 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:55:08.803436    7668 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-238000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.191497834s for "functional-238000" cluster.
I1028 04:55:08.818301    7452 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (74.440583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
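Note: the soft-start path reuses the existing stopped machine (the fixHost and recreateIfNeeded steps above) instead of creating a new one, but the retry shape is the same as in the creation path: one attempt, a fixed 5-second wait, one retry, then exit with GUEST_PROVISION. A schematic Go reconstruction of that control flow as it appears in the log follows (an illustration under stated assumptions, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start step; here it always fails the
// same way the log does.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // fixed delay, matching "Will try again in 5 seconds"
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			return
		}
	}
	fmt.Println("host started")
}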

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.525375ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-238000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (34.598375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-238000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-238000 get po -A: exit status 1 (26.735042ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-238000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-238000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-238000\n"*: args "kubectl --context functional-238000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-238000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (35.344292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh sudo crictl images: exit status 83 (45.675417ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-238000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)
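
Note: this test only checks that images added via `minikube cache add` are visible inside the node; like every other subtest here it dies with `exit status 83` and the `host is not running: state=Stopped` advisory before reaching the real assertion. On a healthy cluster the equivalent manual check would be (a sketch reusing commands already shown in this run):

    out/minikube-darwin-arm64 -p functional-238000 cache add registry.k8s.io/pause:3.3
    out/minikube-darwin-arm64 -p functional-238000 ssh sudo crictl images | grep pause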

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (42.245833ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-238000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.929375ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (44.726166ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-238000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 kubectl -- --context functional-238000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 kubectl -- --context functional-238000 get pods: exit status 1 (704.454958ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-238000
	* no server found for cluster "functional-238000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-238000 kubectl -- --context functional-238000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (36.20275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.74s)
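
Note: `minikube kubectl --` forwards everything after the `--` to a kubectl binary matching the cluster's Kubernetes version, so this subtest fails for the same missing-context reason as the direct kubectl calls above, e.g.:

    out/minikube-darwin-arm64 -p functional-238000 kubectl -- get pods -A   # same failure until the context exists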

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.2s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-238000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-238000 get pods: exit status 1 (1.162226875s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-238000
	* no server found for cluster "functional-238000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-238000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (33.289958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.20s)

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-238000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-238000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.188554708s)

-- stdout --
	* [functional-238000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-238000" primary control-plane node in "functional-238000" cluster
	* Restarting existing qemu2 VM for "functional-238000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-238000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-238000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-238000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.189211542s for "functional-238000" cluster.
I1028 04:55:19.621399    7452 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (75.039042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
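
Note: the root cause of this restart failure is visible in the QEMU invocation logged above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which needs a socket_vmnet daemon listening on /var/run/socket_vmnet, and that connection is refused both times. A plausible triage sequence on the agent, assuming the Homebrew socket_vmnet install implied by the paths in the log:

    ls -l /var/run/socket_vmnet              # the UNIX socket should exist
    pgrep -fl socket_vmnet                   # a socket_vmnet daemon should be running
    sudo brew services restart socket_vmnet  # it must run as root to create the vmnet interface
    out/minikube-darwin-arm64 delete -p functional-238000   # last resort, as the error text itself suggests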

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-238000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-238000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.54775ms)

** stderr ** 
	error: context "functional-238000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-238000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (34.510125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
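
Note: on a running cluster this assertion reduces to every control-plane pod reporting phase Running; a standalone version of that check with plain kubectl (no suite helpers) would be:

    kubectl --context functional-238000 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'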

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 logs: exit status 83 (79.57925ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | -p download-only-131000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	| delete  | -p download-only-131000                                                  | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	| start   | -o=json --download-only                                                  | download-only-803000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | -p download-only-803000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	| delete  | -p download-only-803000                                                  | download-only-803000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	| delete  | -p download-only-131000                                                  | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	| delete  | -p download-only-803000                                                  | download-only-803000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	| start   | --download-only -p                                                       | binary-mirror-237000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | binary-mirror-237000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:57821                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-237000                                                  | binary-mirror-237000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	| addons  | enable dashboard -p                                                      | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | addons-578000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | addons-578000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-578000 --wait=true                                             | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-578000                                                         | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	| start   | -p nospam-220000 -n=1 --memory=2250 --wait=false                         | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-220000                                                         | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	| start   | -p functional-238000                                                     | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-238000                                                     | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
	|         | minikube-local-cache-test:functional-238000                              |                      |         |         |                     |                     |
	| cache   | functional-238000 cache delete                                           | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
	|         | minikube-local-cache-test:functional-238000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
	| ssh     | functional-238000 ssh sudo                                               | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-238000                                                        | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-238000 ssh                                                    | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-238000 cache reload                                           | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
	| ssh     | functional-238000 ssh                                                    | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-238000 kubectl --                                             | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
	|         | --context functional-238000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-238000                                                     | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 04:55:14
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 04:55:14.462186    7743 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:55:14.462334    7743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:55:14.462335    7743 out.go:358] Setting ErrFile to fd 2...
	I1028 04:55:14.462337    7743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:55:14.462462    7743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:55:14.463613    7743 out.go:352] Setting JSON to false
	I1028 04:55:14.480867    7743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5085,"bootTime":1730111429,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:55:14.480949    7743 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:55:14.484801    7743 out.go:177] * [functional-238000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:55:14.493515    7743 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 04:55:14.493561    7743 notify.go:220] Checking for updates...
	I1028 04:55:14.500408    7743 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 04:55:14.503438    7743 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:55:14.506424    7743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:55:14.509444    7743 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 04:55:14.512495    7743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:55:14.515788    7743 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:55:14.515840    7743 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:55:14.520378    7743 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:55:14.527421    7743 start.go:297] selected driver: qemu2
	I1028 04:55:14.527426    7743 start.go:901] validating driver "qemu2" against &{Name:functional-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:55:14.527489    7743 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:55:14.530076    7743 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:55:14.530100    7743 cni.go:84] Creating CNI manager for ""
	I1028 04:55:14.530125    7743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:55:14.530164    7743 start.go:340] cluster config:
	{Name:functional-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:55:14.534691    7743 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:55:14.541201    7743 out.go:177] * Starting "functional-238000" primary control-plane node in "functional-238000" cluster
	I1028 04:55:14.545385    7743 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:55:14.545398    7743 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:55:14.545409    7743 cache.go:56] Caching tarball of preloaded images
	I1028 04:55:14.545485    7743 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:55:14.545489    7743 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:55:14.545546    7743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/functional-238000/config.json ...
	I1028 04:55:14.546058    7743 start.go:360] acquireMachinesLock for functional-238000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:55:14.546101    7743 start.go:364] duration metric: took 38.916µs to acquireMachinesLock for "functional-238000"
	I1028 04:55:14.546107    7743 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:55:14.546110    7743 fix.go:54] fixHost starting: 
	I1028 04:55:14.546224    7743 fix.go:112] recreateIfNeeded on functional-238000: state=Stopped err=<nil>
	W1028 04:55:14.546229    7743 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:55:14.553417    7743 out.go:177] * Restarting existing qemu2 VM for "functional-238000" ...
	I1028 04:55:14.557465    7743 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:55:14.557513    7743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ad:da:29:b3:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/disk.qcow2
	I1028 04:55:14.559745    7743 main.go:141] libmachine: STDOUT: 
	I1028 04:55:14.559759    7743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:55:14.559793    7743 fix.go:56] duration metric: took 13.681ms for fixHost
	I1028 04:55:14.559797    7743 start.go:83] releasing machines lock for "functional-238000", held for 13.693166ms
	W1028 04:55:14.559801    7743 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:55:14.559832    7743 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:55:14.559836    7743 start.go:729] Will try again in 5 seconds ...
	I1028 04:55:19.562043    7743 start.go:360] acquireMachinesLock for functional-238000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:55:19.562405    7743 start.go:364] duration metric: took 307.542µs to acquireMachinesLock for "functional-238000"
	I1028 04:55:19.562526    7743 start.go:96] Skipping create...Using existing machine configuration
	I1028 04:55:19.562550    7743 fix.go:54] fixHost starting: 
	I1028 04:55:19.563351    7743 fix.go:112] recreateIfNeeded on functional-238000: state=Stopped err=<nil>
	W1028 04:55:19.563371    7743 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 04:55:19.566973    7743 out.go:177] * Restarting existing qemu2 VM for "functional-238000" ...
	I1028 04:55:19.574864    7743 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:55:19.575108    7743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ad:da:29:b3:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/disk.qcow2
	I1028 04:55:19.584781    7743 main.go:141] libmachine: STDOUT: 
	I1028 04:55:19.584825    7743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:55:19.584894    7743 fix.go:56] duration metric: took 22.3595ms for fixHost
	I1028 04:55:19.584907    7743 start.go:83] releasing machines lock for "functional-238000", held for 22.488583ms
	W1028 04:55:19.585094    7743 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-238000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:55:19.593744    7743 out.go:201] 
	W1028 04:55:19.597903    7743 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:55:19.597941    7743 out.go:270] * 
	W1028 04:55:19.600859    7743 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:55:19.606735    7743 out.go:201] 
	
	
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-238000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | -p download-only-131000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| delete  | -p download-only-131000                                                  | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| start   | -o=json --download-only                                                  | download-only-803000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | -p download-only-803000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| delete  | -p download-only-803000                                                  | download-only-803000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| delete  | -p download-only-131000                                                  | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| delete  | -p download-only-803000                                                  | download-only-803000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| start   | --download-only -p                                                       | binary-mirror-237000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | binary-mirror-237000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:57821                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-237000                                                  | binary-mirror-237000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| addons  | enable dashboard -p                                                      | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | addons-578000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | addons-578000                                                            |                      |         |         |                     |                     |
| start   | -p addons-578000 --wait=true                                             | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-578000                                                         | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| start   | -p nospam-220000 -n=1 --memory=2250 --wait=false                         | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-220000                                                         | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| start   | -p functional-238000                                                     | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-238000                                                     | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | minikube-local-cache-test:functional-238000                              |                      |         |         |                     |                     |
| cache   | functional-238000 cache delete                                           | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | minikube-local-cache-test:functional-238000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
| ssh     | functional-238000 ssh sudo                                               | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-238000                                                        | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-238000 ssh                                                    | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-238000 cache reload                                           | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
| ssh     | functional-238000 ssh                                                    | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-238000 kubectl --                                             | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | --context functional-238000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-238000                                                     | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/10/28 04:55:14
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1028 04:55:14.462186    7743 out.go:345] Setting OutFile to fd 1 ...
I1028 04:55:14.462334    7743 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:55:14.462335    7743 out.go:358] Setting ErrFile to fd 2...
I1028 04:55:14.462337    7743 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:55:14.462462    7743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
I1028 04:55:14.463613    7743 out.go:352] Setting JSON to false
I1028 04:55:14.480867    7743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5085,"bootTime":1730111429,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1028 04:55:14.480949    7743 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1028 04:55:14.484801    7743 out.go:177] * [functional-238000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1028 04:55:14.493515    7743 out.go:177]   - MINIKUBE_LOCATION=19875
I1028 04:55:14.493561    7743 notify.go:220] Checking for updates...
I1028 04:55:14.500408    7743 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
I1028 04:55:14.503438    7743 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1028 04:55:14.506424    7743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1028 04:55:14.509444    7743 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
I1028 04:55:14.512495    7743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1028 04:55:14.515788    7743 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:55:14.515840    7743 driver.go:394] Setting default libvirt URI to qemu:///system
I1028 04:55:14.520378    7743 out.go:177] * Using the qemu2 driver based on existing profile
I1028 04:55:14.527421    7743 start.go:297] selected driver: qemu2
I1028 04:55:14.527426    7743 start.go:901] validating driver "qemu2" against &{Name:functional-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 04:55:14.527489    7743 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1028 04:55:14.530076    7743 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1028 04:55:14.530100    7743 cni.go:84] Creating CNI manager for ""
I1028 04:55:14.530125    7743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1028 04:55:14.530164    7743 start.go:340] cluster config:
{Name:functional-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 04:55:14.534691    7743 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 04:55:14.541201    7743 out.go:177] * Starting "functional-238000" primary control-plane node in "functional-238000" cluster
I1028 04:55:14.545385    7743 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1028 04:55:14.545398    7743 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1028 04:55:14.545409    7743 cache.go:56] Caching tarball of preloaded images
I1028 04:55:14.545485    7743 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1028 04:55:14.545489    7743 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1028 04:55:14.545546    7743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/functional-238000/config.json ...
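The config.json saved here is plain JSON on disk. The sketch below decodes a few of its fields; the struct shape is an assumption inferred from the cluster config dump above, not minikube's full schema.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// profileConfig models a small subset of a minikube profile's
// config.json. Field names are inferred from the cluster config
// dump in this log; the real schema has many more fields.
type profileConfig struct {
	Name             string
	Driver           string
	Memory           int
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}
}

func main() {
	// Pass the path to a profile config.json as the first argument.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s k8s=%s runtime=%s\n", cfg.Name, cfg.Driver,
		cfg.KubernetesConfig.KubernetesVersion, cfg.KubernetesConfig.ContainerRuntime)
}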
I1028 04:55:14.546058    7743 start.go:360] acquireMachinesLock for functional-238000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1028 04:55:14.546101    7743 start.go:364] duration metric: took 38.916µs to acquireMachinesLock for "functional-238000"
I1028 04:55:14.546107    7743 start.go:96] Skipping create...Using existing machine configuration
I1028 04:55:14.546110    7743 fix.go:54] fixHost starting: 
I1028 04:55:14.546224    7743 fix.go:112] recreateIfNeeded on functional-238000: state=Stopped err=<nil>
W1028 04:55:14.546229    7743 fix.go:138] unexpected machine state, will restart: <nil>
I1028 04:55:14.553417    7743 out.go:177] * Restarting existing qemu2 VM for "functional-238000" ...
I1028 04:55:14.557465    7743 qemu.go:418] Using hvf for hardware acceleration
I1028 04:55:14.557513    7743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ad:da:29:b3:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/disk.qcow2
I1028 04:55:14.559745    7743 main.go:141] libmachine: STDOUT: 
I1028 04:55:14.559759    7743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1028 04:55:14.559793    7743 fix.go:56] duration metric: took 13.681ms for fixHost
I1028 04:55:14.559797    7743 start.go:83] releasing machines lock for "functional-238000", held for 13.693166ms
W1028 04:55:14.559801    7743 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1028 04:55:14.559832    7743 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1028 04:55:14.559836    7743 start.go:729] Will try again in 5 seconds ...
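start.go makes exactly one more attempt after a fixed 5-second pause. A minimal sketch of that retry shape, assuming any func() error as the start attempt (illustrative only; not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs attempt up to max times, sleeping delay between
// failures, mirroring the "Will try again in 5 seconds" pass
// logged above. Illustrative; start.go differs in detail.
func retry(attempt func() error, max int, delay time.Duration) error {
	var err error
	for i := 0; i < max; i++ {
		if err = attempt(); err == nil {
			return nil
		}
		if i < max-1 {
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("after %d attempts: %w", max, err)
}

func main() {
	err := retry(func() error {
		// Stand-in for the driver start that fails in this log.
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}, 2, 5*time.Second)
	fmt.Println(err)
}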
I1028 04:55:19.562043    7743 start.go:360] acquireMachinesLock for functional-238000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1028 04:55:19.562405    7743 start.go:364] duration metric: took 307.542µs to acquireMachinesLock for "functional-238000"
I1028 04:55:19.562526    7743 start.go:96] Skipping create...Using existing machine configuration
I1028 04:55:19.562550    7743 fix.go:54] fixHost starting: 
I1028 04:55:19.563351    7743 fix.go:112] recreateIfNeeded on functional-238000: state=Stopped err=<nil>
W1028 04:55:19.563371    7743 fix.go:138] unexpected machine state, will restart: <nil>
I1028 04:55:19.566973    7743 out.go:177] * Restarting existing qemu2 VM for "functional-238000" ...
I1028 04:55:19.574864    7743 qemu.go:418] Using hvf for hardware acceleration
I1028 04:55:19.575108    7743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ad:da:29:b3:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/disk.qcow2
I1028 04:55:19.584781    7743 main.go:141] libmachine: STDOUT: 
I1028 04:55:19.584825    7743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1028 04:55:19.584894    7743 fix.go:56] duration metric: took 22.3595ms for fixHost
I1028 04:55:19.584907    7743 start.go:83] releasing machines lock for "functional-238000", held for 22.488583ms
W1028 04:55:19.585094    7743 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-238000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1028 04:55:19.593744    7743 out.go:201] 
W1028 04:55:19.597903    7743 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1028 04:55:19.597941    7743 out.go:270] * 
W1028 04:55:19.600859    7743 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1028 04:55:19.606735    7743 out.go:201] 

* The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
***
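Every attempt above dies the same way because nothing is listening on /var/run/socket_vmnet on the build host, so qemu's networking setup fails before the guest boots. A standalone probe of that socket reproduces the error without minikube (a sketch; not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the socket_vmnet control socket that the qemu2 driver needs.
// "connection refused" here matches the driver-start failure in this
// log and usually means the socket_vmnet service is not running on
// the host.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}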
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
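The assertion at functional_test.go:1228 fails because the captured output never mentions "Linux": the guest never booted, so `minikube logs` returns only host-side advice (exit status 83). A minimal sketch of that style of output check, with a hypothetical helper name rather than the suite's verbatim code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// expectWord runs a command and reports an error when its combined
// output lacks the expected word -- the same shape of check the test
// applies to `minikube logs`. The exec error is ignored on purpose:
// the command may exit non-zero yet still produce inspectable output.
func expectWord(word, name string, args ...string) error {
	out, _ := exec.Command(name, args...).CombinedOutput()
	if !strings.Contains(string(out), word) {
		return fmt.Errorf("expected output to include %q but got:\n%s", word, out)
	}
	return nil
}

func main() {
	if err := expectWord("Linux", "out/minikube-darwin-arm64", "-p", "functional-238000", "logs"); err != nil {
		fmt.Println(err)
	}
}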

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd483955267/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | -p download-only-131000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| delete  | -p download-only-131000                                                  | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| start   | -o=json --download-only                                                  | download-only-803000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | -p download-only-803000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| delete  | -p download-only-803000                                                  | download-only-803000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| delete  | -p download-only-131000                                                  | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| delete  | -p download-only-803000                                                  | download-only-803000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| start   | --download-only -p                                                       | binary-mirror-237000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | binary-mirror-237000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:57821                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-237000                                                  | binary-mirror-237000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| addons  | enable dashboard -p                                                      | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | addons-578000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | addons-578000                                                            |                      |         |         |                     |                     |
| start   | -p addons-578000 --wait=true                                             | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-578000                                                         | addons-578000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| start   | -p nospam-220000 -n=1 --memory=2250 --wait=false                         | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-220000 --log_dir                                                  | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-220000                                                         | nospam-220000        | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
| start   | -p functional-238000                                                     | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-238000                                                     | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-238000 cache add                                              | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | minikube-local-cache-test:functional-238000                              |                      |         |         |                     |                     |
| cache   | functional-238000 cache delete                                           | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | minikube-local-cache-test:functional-238000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
| ssh     | functional-238000 ssh sudo                                               | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-238000                                                        | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-238000 ssh                                                    | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-238000 cache reload                                           | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
| ssh     | functional-238000 ssh                                                    | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT | 28 Oct 24 04:55 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-238000 kubectl --                                             | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | --context functional-238000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-238000                                                     | functional-238000    | jenkins | v1.34.0 | 28 Oct 24 04:55 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/10/28 04:55:14
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1028 04:55:14.462186    7743 out.go:345] Setting OutFile to fd 1 ...
I1028 04:55:14.462334    7743 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:55:14.462335    7743 out.go:358] Setting ErrFile to fd 2...
I1028 04:55:14.462337    7743 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:55:14.462462    7743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
I1028 04:55:14.463613    7743 out.go:352] Setting JSON to false
I1028 04:55:14.480867    7743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5085,"bootTime":1730111429,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1028 04:55:14.480949    7743 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1028 04:55:14.484801    7743 out.go:177] * [functional-238000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1028 04:55:14.493515    7743 out.go:177]   - MINIKUBE_LOCATION=19875
I1028 04:55:14.493561    7743 notify.go:220] Checking for updates...
I1028 04:55:14.500408    7743 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
I1028 04:55:14.503438    7743 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1028 04:55:14.506424    7743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1028 04:55:14.509444    7743 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
I1028 04:55:14.512495    7743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1028 04:55:14.515788    7743 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:55:14.515840    7743 driver.go:394] Setting default libvirt URI to qemu:///system
I1028 04:55:14.520378    7743 out.go:177] * Using the qemu2 driver based on existing profile
I1028 04:55:14.527421    7743 start.go:297] selected driver: qemu2
I1028 04:55:14.527426    7743 start.go:901] validating driver "qemu2" against &{Name:functional-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 04:55:14.527489    7743 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1028 04:55:14.530076    7743 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1028 04:55:14.530100    7743 cni.go:84] Creating CNI manager for ""
I1028 04:55:14.530125    7743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1028 04:55:14.530164    7743 start.go:340] cluster config:
{Name:functional-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 04:55:14.534691    7743 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 04:55:14.541201    7743 out.go:177] * Starting "functional-238000" primary control-plane node in "functional-238000" cluster
I1028 04:55:14.545385    7743 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1028 04:55:14.545398    7743 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1028 04:55:14.545409    7743 cache.go:56] Caching tarball of preloaded images
I1028 04:55:14.545485    7743 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1028 04:55:14.545489    7743 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1028 04:55:14.545546    7743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/functional-238000/config.json ...
I1028 04:55:14.546058    7743 start.go:360] acquireMachinesLock for functional-238000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1028 04:55:14.546101    7743 start.go:364] duration metric: took 38.916µs to acquireMachinesLock for "functional-238000"
I1028 04:55:14.546107    7743 start.go:96] Skipping create...Using existing machine configuration
I1028 04:55:14.546110    7743 fix.go:54] fixHost starting: 
I1028 04:55:14.546224    7743 fix.go:112] recreateIfNeeded on functional-238000: state=Stopped err=<nil>
W1028 04:55:14.546229    7743 fix.go:138] unexpected machine state, will restart: <nil>
I1028 04:55:14.553417    7743 out.go:177] * Restarting existing qemu2 VM for "functional-238000" ...
I1028 04:55:14.557465    7743 qemu.go:418] Using hvf for hardware acceleration
I1028 04:55:14.557513    7743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ad:da:29:b3:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/disk.qcow2
I1028 04:55:14.559745    7743 main.go:141] libmachine: STDOUT: 
I1028 04:55:14.559759    7743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1028 04:55:14.559793    7743 fix.go:56] duration metric: took 13.681ms for fixHost
I1028 04:55:14.559797    7743 start.go:83] releasing machines lock for "functional-238000", held for 13.693166ms
W1028 04:55:14.559801    7743 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1028 04:55:14.559832    7743 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1028 04:55:14.559836    7743 start.go:729] Will try again in 5 seconds ...
I1028 04:55:19.562043    7743 start.go:360] acquireMachinesLock for functional-238000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1028 04:55:19.562405    7743 start.go:364] duration metric: took 307.542µs to acquireMachinesLock for "functional-238000"
I1028 04:55:19.562526    7743 start.go:96] Skipping create...Using existing machine configuration
I1028 04:55:19.562550    7743 fix.go:54] fixHost starting: 
I1028 04:55:19.563351    7743 fix.go:112] recreateIfNeeded on functional-238000: state=Stopped err=<nil>
W1028 04:55:19.563371    7743 fix.go:138] unexpected machine state, will restart: <nil>
I1028 04:55:19.566973    7743 out.go:177] * Restarting existing qemu2 VM for "functional-238000" ...
I1028 04:55:19.574864    7743 qemu.go:418] Using hvf for hardware acceleration
I1028 04:55:19.575108    7743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ad:da:29:b3:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/functional-238000/disk.qcow2
I1028 04:55:19.584781    7743 main.go:141] libmachine: STDOUT: 
I1028 04:55:19.584825    7743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1028 04:55:19.584894    7743 fix.go:56] duration metric: took 22.3595ms for fixHost
I1028 04:55:19.584907    7743 start.go:83] releasing machines lock for "functional-238000", held for 22.488583ms
W1028 04:55:19.585094    7743 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-238000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1028 04:55:19.593744    7743 out.go:201] 
W1028 04:55:19.597903    7743 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1028 04:55:19.597941    7743 out.go:270] * 
W1028 04:55:19.600859    7743 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1028 04:55:19.606735    7743 out.go:201] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
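
Every start attempt in the log above dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet (the SocketVMnetPath in the cluster config), so the qemu2 VM never receives its network file descriptor and provisioning aborts with GUEST_PROVISION. A minimal standalone probe of that socket, written here as an illustration (it is not part of the suite; the path is taken from the log):

-- sketch (Go) --
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket socket_vmnet_client hands to qemu. A
	// "connection refused" here matches the STDERR lines above and
	// means the socket_vmnet daemon is not running on the agent.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
-- /sketch --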

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-238000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-238000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.765625ms)

** stderr ** 
	error: context "functional-238000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-238000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
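
The error `context "functional-238000" does not exist` is a downstream symptom of the failed start: minikube only writes the profile's context into the kubeconfig after a successful boot, so every kubectl-based test that follows fails before ever reaching a cluster. A hedged sketch of the lookup kubectl performs, using client-go (this is illustrative, not minikube's own code):

-- sketch (Go) --
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve the kubeconfig path the same way kubectl does:
	// honor $KUBECONFIG, fall back to ~/.kube/config.
	path := clientcmd.NewDefaultClientConfigLoadingRules().GetDefaultFilename()
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	// The failed start means this entry was never written.
	if _, ok := cfg.Contexts["functional-238000"]; !ok {
		fmt.Printf("context %q does not exist\n", "functional-238000")
	}
}
-- /sketch --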

TestFunctional/parallel/DashboardCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-238000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-238000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-238000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-238000 --alsologtostderr -v=1] stderr:
I1028 04:56:06.577399    8055 out.go:345] Setting OutFile to fd 1 ...
I1028 04:56:06.577796    8055 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:06.577800    8055 out.go:358] Setting ErrFile to fd 2...
I1028 04:56:06.577802    8055 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:06.577938    8055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
I1028 04:56:06.578146    8055 mustload.go:65] Loading cluster: functional-238000
I1028 04:56:06.578373    8055 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:56:06.581738    8055 out.go:177] * The control-plane node functional-238000 host is not running: state=Stopped
I1028 04:56:06.587609    8055 out.go:177]   To start a cluster, run: "minikube start -p functional-238000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (45.915917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)
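
The "Loaded profile config" lines in the dashboard stderr come from reading the profile's config.json (the path appears in the "Last Start" log above). A small standalone reader that prints the same three fields; the struct is an illustrative subset inferred from the config dump, not minikube's full schema, and it assumes the default ~/.minikube home rather than the CI agent's MINIKUBE_HOME override:

-- sketch (Go) --
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// Illustrative subset of the profile config (assumed field names,
// matching the cluster-config dump earlier in the log).
type profileConfig struct {
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	home, _ := os.UserHomeDir()
	path := filepath.Join(home, ".minikube", "profiles", "functional-238000", "config.json")
	b, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	var c profileConfig
	if err := json.Unmarshal(b, &c); err != nil {
		fmt.Println(err)
		return
	}
	// Mirrors the log line: Driver=qemu2, ContainerRuntime=docker, ...
	fmt.Printf("Driver=%s, ContainerRuntime=%s, KubernetesVersion=%s\n",
		c.Driver, c.KubernetesConfig.ContainerRuntime, c.KubernetesConfig.KubernetesVersion)
}
-- /sketch --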

TestFunctional/parallel/StatusCmd (0.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 status: exit status 7 (34.620209ms)

-- stdout --
	functional-238000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-238000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.298666ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-238000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 status -o json: exit status 7 (33.465708ms)

-- stdout --
	{"Name":"functional-238000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-238000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (33.345416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.14s)
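
The -f/--format flag used above takes a Go text/template rendered against the status structure whose JSON form appears in the last command. A sketch with an illustrative struct mirroring those JSON field names (not minikube's internal type); note that `kublet` in the template is literal label text reproduced from the test's own format string (a typo for `kubelet`), while the field reference {{.Kubelet}} is spelled correctly:

-- sketch (Go) --
package main

import (
	"os"
	"text/template"
)

// Illustrative stand-in; field names mirror the "status -o json" output.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	st := Status{Name: "functional-238000", Host: "Stopped",
		Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	// Same template string the test passes via -f.
	t := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	// Prints the same line the test compared against in the stdout block above.
	_ = t.Execute(os.Stdout, st)
}
-- /sketch --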

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-238000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-238000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.908291ms)

** stderr ** 
	error: context "functional-238000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-238000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-238000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-238000 describe po hello-node-connect: exit status 1 (26.271542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-238000

** /stderr **
functional_test.go:1604: "kubectl --context functional-238000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-238000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-238000 logs -l app=hello-node-connect: exit status 1 (26.236542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-238000

** /stderr **
functional_test.go:1610: "kubectl --context functional-238000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-238000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-238000 describe svc hello-node-connect: exit status 1 (26.50775ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-238000

** /stderr **
functional_test.go:1616: "kubectl --context functional-238000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (33.237875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-238000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (34.866958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "echo hello": exit status 83 (46.648459ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-238000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-238000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-238000\"\n"*. args "out/minikube-darwin-arm64 -p functional-238000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "cat /etc/hostname": exit status 83 (44.828667ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-238000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-238000"- but got *"* The control-plane node functional-238000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-238000\"\n"*. args "out/minikube-darwin-arm64 -p functional-238000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (34.875083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)
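
The exit status 83 captured above reaches the harness through the standard Go process API: the test shells out to the minikube binary and pulls the code from *exec.ExitError. A minimal sketch of that capture step (binary path and args copied from the log; nothing is assumed about what code 83 means internally):

-- sketch (Go) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"-p", "functional-238000", "ssh", "echo hello")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Reproduces the "exit status 83" plus the stdout advice block.
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Printf("%s", out)
}
-- /sketch --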

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (57.067125ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-238000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh -n functional-238000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh -n functional-238000 "sudo cat /home/docker/cp-test.txt": exit status 83 (50.034458ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-238000 ssh -n functional-238000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-238000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-238000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 cp functional-238000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd14581357/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 cp functional-238000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd14581357/001/cp-test.txt: exit status 83 (47.562917ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-238000 cp functional-238000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd14581357/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh -n functional-238000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh -n functional-238000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.734625ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-238000 ssh -n functional-238000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd14581357/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-238000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-238000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (54.987ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-238000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh -n functional-238000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh -n functional-238000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (44.675125ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-238000 ssh -n functional-238000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-238000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-238000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.30s)
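
The "(-want +got)" blocks above are in the diff format produced by github.com/google/go-cmp (whether the helpers call cmp.Diff directly is an assumption here, but the shape matches). Comparing the expected test-file payload against the stopped-host advice reproduces the mismatch reported by helpers_test.go:

-- sketch (Go) --
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-238000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-238000\"\n"
	// Prints a (-want +got) diff in the same shape as the log above.
	fmt.Println(cmp.Diff(want, got))
}
-- /sketch --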

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7452/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /etc/test/nested/copy/7452/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /etc/test/nested/copy/7452/hosts": exit status 83 (43.841583ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /etc/test/nested/copy/7452/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-238000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-238000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (34.761166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7452.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /etc/ssl/certs/7452.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /etc/ssl/certs/7452.pem": exit status 83 (46.316708ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/7452.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-238000 ssh \"sudo cat /etc/ssl/certs/7452.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7452.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-238000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-238000"
	"""
)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7452.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /usr/share/ca-certificates/7452.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /usr/share/ca-certificates/7452.pem": exit status 83 (43.77575ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/7452.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-238000 ssh \"sudo cat /usr/share/ca-certificates/7452.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7452.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-238000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-238000"
	"""
)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (43.790042ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-238000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-238000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-238000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/74522.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /etc/ssl/certs/74522.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /etc/ssl/certs/74522.pem": exit status 83 (51.565583ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/74522.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-238000 ssh \"sudo cat /etc/ssl/certs/74522.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/74522.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-238000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-238000"
	"""
)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/74522.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /usr/share/ca-certificates/74522.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /usr/share/ca-certificates/74522.pem": exit status 83 (47.633083ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/74522.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-238000 ssh \"sudo cat /usr/share/ca-certificates/74522.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/74522.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-238000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-238000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (42.767708ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-238000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-238000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-238000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (35.771959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.31s)
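Note: the hash-named file checked above (/etc/ssl/certs/3ec20f2e.0) is the OpenSSL subject-hash alias of the same certificate. A minimal Go sketch, not part of the test suite, that derives the expected alias locally (assumes openssl is on PATH and a copy of minikube_test2.pem sits in the working directory):

	// hashcheck.go - sketch: derive the CA-directory alias for a PEM file.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `openssl x509 -hash -noout` prints the subject hash (e.g. "3ec20f2e");
		// the CA directory entry is that hash plus a ".0" suffix.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "minikube_test2.pem").Output()
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		hash := strings.TrimSpace(string(out))
		fmt.Printf("expected CA dir entry: /etc/ssl/certs/%s.0\n", hash)
	}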

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-238000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-238000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.568667ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-238000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-238000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-238000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-238000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-238000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-238000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-238000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-238000 -n functional-238000: exit status 7 (38.845584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.07s)
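Note: the failing query above lists only the node's label keys via a kubectl go-template. A minimal Go sketch of the same check, assuming kubectl is on PATH and a functional-238000 context exists (here it was missing, hence the configuration error):

	// labels.go - sketch: list node label keys and assert the minikube.k8s.io/* set.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
		out, err := exec.Command("kubectl", "--context", "functional-238000",
			"get", "nodes", "--output=go-template", "--template="+tmpl).CombinedOutput()
		if err != nil {
			fmt.Println(strings.TrimSpace(string(out)), err)
			return
		}
		for _, want := range []string{"minikube.k8s.io/commit", "minikube.k8s.io/version",
			"minikube.k8s.io/updated_at", "minikube.k8s.io/name", "minikube.k8s.io/primary"} {
			if !strings.Contains(string(out), want) {
				fmt.Println("missing label:", want)
			}
		}
	}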

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "sudo systemctl is-active crio": exit status 83 (42.971833ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-238000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-238000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
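Note: systemctl is-active exits 0 only for an active unit, so on a docker-runtime node the crio probe should fail with output like "inactive" rather than with minikube's stopped-host advisory. A minimal Go sketch of the probe, assuming minikube is on PATH and the profile's node is running:

	// runtime_check.go - sketch: confirm the non-selected runtime is disabled.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("minikube", "-p", "functional-238000",
			"ssh", "sudo systemctl is-active crio")
		out, err := cmd.Output()
		state := strings.TrimSpace(string(out))
		// exit 0 => unit active (unexpected here); non-zero exit with
		// "inactive"/"unknown" => the crio runtime is correctly disabled.
		fmt.Printf("state=%q err=%v\n", state, err)
	}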

TestFunctional/parallel/Version/components (0.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 version -o=json --components: exit status 83 (49.802292ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)
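Note: the test asserts that each component name appears in the output of minikube version -o=json --components. A minimal Go sketch of that substring check, assuming minikube is on PATH:

	// components.go - sketch: check the expected component names in version output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-238000",
			"version", "-o=json", "--components").Output()
		if err != nil {
			fmt.Println("version failed:", err)
			return
		}
		for _, want := range []string{"buildctl", "commit", "containerd", "crictl",
			"crio", "ctr", "docker", "minikubeVersion", "podman", "crun"} {
			if !strings.Contains(string(out), want) {
				fmt.Println("missing component:", want)
			}
		}
	}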

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-238000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-238000 image ls --format short --alsologtostderr:
I1028 04:56:07.021353    8070 out.go:345] Setting OutFile to fd 1 ...
I1028 04:56:07.021568    8070 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:07.021571    8070 out.go:358] Setting ErrFile to fd 2...
I1028 04:56:07.021574    8070 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:07.021708    8070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
I1028 04:56:07.022171    8070 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:56:07.022238    8070 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
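Note: the three format variants below (table, json, yaml) fail the same way; the assertion is simply that registry.k8s.io/pause appears in the image ls output. A minimal Go sketch covering all four formats, assuming minikube is on PATH and the profile has a running node:

	// imagels.go - sketch: the presence check shared by the "image ls" format tests.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, format := range []string{"short", "table", "json", "yaml"} {
			out, _ := exec.Command("minikube", "-p", "functional-238000",
				"image", "ls", "--format", format).Output()
			found := strings.Contains(string(out), "registry.k8s.io/pause")
			fmt.Printf("format=%-5s pause image listed: %v\n", format, found)
		}
	}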

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-238000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-238000 image ls --format table --alsologtostderr:
I1028 04:56:07.267041    8082 out.go:345] Setting OutFile to fd 1 ...
I1028 04:56:07.267222    8082 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:07.267225    8082 out.go:358] Setting ErrFile to fd 2...
I1028 04:56:07.267227    8082 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:07.267344    8082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
I1028 04:56:07.267760    8082 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:56:07.267820    8082 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-238000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-238000 image ls --format json --alsologtostderr:
I1028 04:56:07.226784    8080 out.go:345] Setting OutFile to fd 1 ...
I1028 04:56:07.226963    8080 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:07.226967    8080 out.go:358] Setting ErrFile to fd 2...
I1028 04:56:07.226969    8080 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:07.227090    8080 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
I1028 04:56:07.227541    8080 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:56:07.227606    8080 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-238000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-238000 image ls --format yaml --alsologtostderr:
I1028 04:56:07.061734    8072 out.go:345] Setting OutFile to fd 1 ...
I1028 04:56:07.061922    8072 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:07.061926    8072 out.go:358] Setting ErrFile to fd 2...
I1028 04:56:07.061928    8072 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:07.062054    8072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
I1028 04:56:07.062554    8072 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:56:07.062613    8072 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh pgrep buildkitd: exit status 83 (45.983416ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image build -t localhost/my-image:functional-238000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-238000 image build -t localhost/my-image:functional-238000 testdata/build --alsologtostderr:
I1028 04:56:07.147032    8076 out.go:345] Setting OutFile to fd 1 ...
I1028 04:56:07.147536    8076 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:07.147540    8076 out.go:358] Setting ErrFile to fd 2...
I1028 04:56:07.147542    8076 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:56:07.147689    8076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
I1028 04:56:07.148116    8076 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:56:07.148623    8076 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:56:07.148872    8076 build_images.go:133] succeeded building to: 
I1028 04:56:07.148876    8076 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image ls
functional_test.go:446: expected "localhost/my-image:functional-238000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)
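Note: the flow above is probe buildkitd with pgrep, build testdata/build under a local tag, then confirm the tag via image ls. A minimal Go sketch of that build-and-verify loop, assuming minikube is on PATH and a live node:

	// imagebuild.go - sketch: build an image in-cluster and verify it is listed.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "functional-238000"
		tag := "localhost/my-image:" + profile
		if err := exec.Command("minikube", "-p", profile, "image", "build",
			"-t", tag, "testdata/build").Run(); err != nil {
			fmt.Println("build failed:", err)
			return
		}
		out, _ := exec.Command("minikube", "-p", profile, "image", "ls").Output()
		fmt.Println("image present:", strings.Contains(string(out), tag))
	}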

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-238000 docker-env) && out/minikube-darwin-arm64 status -p functional-238000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-238000 docker-env) && out/minikube-darwin-arm64 status -p functional-238000": exit status 1 (45.818875ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
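Note: the check evals the docker-env exports in a bash subshell and re-runs status inside that environment. A minimal Go sketch of the same round-trip, assuming bash and minikube are on PATH:

	// dockerenv.go - sketch: eval docker-env, then run status in that shell.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		script := `eval $(minikube -p functional-238000 docker-env) && ` +
			`minikube status -p functional-238000`
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		fmt.Printf("%s\nerr=%v\n", out, err)
	}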

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 update-context --alsologtostderr -v=2: exit status 83 (48.727958ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
** stderr ** 
	I1028 04:56:06.878061    8064 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:56:06.879066    8064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:56:06.879071    8064 out.go:358] Setting ErrFile to fd 2...
	I1028 04:56:06.879074    8064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:56:06.879224    8064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:56:06.879440    8064 mustload.go:65] Loading cluster: functional-238000
	I1028 04:56:06.879645    8064 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:56:06.884240    8064 out.go:177] * The control-plane node functional-238000 host is not running: state=Stopped
	I1028 04:56:06.888304    8064 out.go:177]   To start a cluster, run: "minikube start -p functional-238000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-238000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-238000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-238000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 update-context --alsologtostderr -v=2: exit status 83 (47.594583ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
** stderr ** 
	I1028 04:56:06.973643    8068 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:56:06.973800    8068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:56:06.973803    8068 out.go:358] Setting ErrFile to fd 2...
	I1028 04:56:06.973806    8068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:56:06.973940    8068 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:56:06.974181    8068 mustload.go:65] Loading cluster: functional-238000
	I1028 04:56:06.974405    8068 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:56:06.979262    8068 out.go:177] * The control-plane node functional-238000 host is not running: state=Stopped
	I1028 04:56:06.983282    8068 out.go:177]   To start a cluster, run: "minikube start -p functional-238000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-238000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-238000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-238000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 update-context --alsologtostderr -v=2: exit status 83 (46.733792ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
** stderr ** 
	I1028 04:56:06.925869    8066 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:56:06.926048    8066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:56:06.926052    8066 out.go:358] Setting ErrFile to fd 2...
	I1028 04:56:06.926054    8066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:56:06.926183    8066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:56:06.926436    8066 mustload.go:65] Loading cluster: functional-238000
	I1028 04:56:06.926631    8066 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:56:06.931249    8066 out.go:177] * The control-plane node functional-238000 host is not running: state=Stopped
	I1028 04:56:06.935252    8066 out.go:177]   To start a cluster, run: "minikube start -p functional-238000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-238000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-238000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-238000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
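Note: the three update-context subtests above differ only in the phrase they expect ("No changes" vs. "context has been updated"). A minimal Go sketch of the shared assertion, assuming minikube is on PATH:

	// updatecontext.go - sketch: run update-context and look for an expected phrase.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-238000",
			"update-context", "--alsologtostderr", "-v=2").CombinedOutput()
		got := string(out)
		ok := strings.Contains(got, "No changes") ||
			strings.Contains(got, "context has been updated")
		fmt.Printf("ok=%v err=%v\ngot: %s\n", ok, err, got)
	}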

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-238000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-238000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.347084ms)

** stderr ** 
	error: context "functional-238000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-238000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 service list: exit status 83 (48.193041ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-238000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-238000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-238000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 service list -o json: exit status 83 (49.838833ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-238000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 service --namespace=default --https --url hello-node: exit status 83 (45.809916ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-238000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 service hello-node --url --format={{.IP}}: exit status 83 (47.757916ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-238000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-238000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-238000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 service hello-node --url: exit status 83 (46.979875ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-238000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test.go:1569: failed to parse "* The control-plane node functional-238000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-238000\"": parse "* The control-plane node functional-238000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-238000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
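Note: the parse failure above is expected for this input: the advisory text contains a newline, which net/url rejects as a control character. A minimal Go reproduction (the 192.168.105.4:31234 URL is only an illustrative placeholder, not from this run):

	// urlparse.go - sketch: why the advisory text fails url.Parse.
	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		advisory := "* The control-plane node functional-238000 host is not running: state=Stopped\n" +
			"  To start a cluster, run: \"minikube start -p functional-238000\""
		if _, err := url.Parse(advisory); err != nil {
			fmt.Println("advisory text:", err) // net/url: invalid control character in URL
		}
		u, _ := url.Parse("http://192.168.105.4:31234") // hypothetical valid service URL
		fmt.Println("valid URL host:", u.Host)
	}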

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-238000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-238000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1028 04:55:21.561608    7861 out.go:345] Setting OutFile to fd 1 ...
I1028 04:55:21.561824    7861 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:55:21.561827    7861 out.go:358] Setting ErrFile to fd 2...
I1028 04:55:21.561830    7861 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:55:21.561965    7861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
I1028 04:55:21.562201    7861 mustload.go:65] Loading cluster: functional-238000
I1028 04:55:21.562426    7861 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:55:21.567755    7861 out.go:177] * The control-plane node functional-238000 host is not running: state=Stopped
I1028 04:55:21.579755    7861 out.go:177]   To start a cluster, run: "minikube start -p functional-238000"

stdout: * The control-plane node functional-238000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-238000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-238000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-238000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-238000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-238000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7860: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-238000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-238000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-238000": client config: context "functional-238000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (78.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1028 04:55:21.640095    7452 retry.go:31] will retry after 1.502203673s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-238000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-238000 get svc nginx-svc: exit status 1 (70.439834ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-238000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-238000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (78.48s)
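Note: the retry.go lines above come from a retry-with-backoff loop around an HTTP GET; because the tunnel never reported a host, the URL is effectively empty and the client returns "http: no Host in request URL". A minimal Go sketch of such a loop (the host-less URL is deliberate, to reproduce the error):

	// retryget.go - sketch: GET with a few backoff retries before giving up.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		url := "http://" // placeholder: the tunnel produced no host
		for attempt := 1; attempt <= 3; attempt++ {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				fmt.Println("status:", resp.Status)
				return
			}
			fmt.Printf("attempt %d: %v; retrying\n", attempt, err)
			time.Sleep(time.Duration(attempt) * time.Second)
		}
	}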

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image load --daemon kicbase/echo-server:functional-238000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-238000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image load --daemon kicbase/echo-server:functional-238000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-238000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
I1028 04:55:23.144719    7452 retry.go:31] will retry after 3.805600176s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-238000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image load --daemon kicbase/echo-server:functional-238000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-238000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image save kicbase/echo-server:functional-238000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-238000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1028 04:56:40.124053    7452 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.029303458s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 12 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
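Both tunnel DNS checks fail the same way: the query to the in-cluster DNS service at 10.96.0.10 never gets an answer, which is what you would expect if "minikube tunnel" is not actually routing host traffic into the cluster network. A minimal Go sketch of the same lookup, for reproducing the check outside the test harness, is below; the server address and record name come from the log above, while the resolver wiring is an assumption, not code from the suite.

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Force every lookup through the cluster DNS service the test digs at.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
        defer cancel()
        ips, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
        if err != nil {
            fmt.Println("lookup failed (tunnel likely not routing):", err)
            return
        }
        fmt.Println("resolved:", ips)
    }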

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1028 04:57:05.262268    7452 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:57:15.264672    7452 retry.go:31] will retry after 3.428381497s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1028 04:57:28.697599    7452 retry.go:31] will retry after 3.405909064s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:64784->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)
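This failure is the HTTP side of the same symptom: the client resolves nginx-svc.default.svc.cluster.local. through 10.96.0.10 and the UDP read times out, so every retry hits Client.Timeout. Below is a sketch of the retry-with-timeout loop the retry.go lines above describe; the URL and the "Welcome to nginx!" success string are taken from the log, while the timeout values and structure are assumptions.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // fetchNginx retries an HTTP GET against the tunneled service name until
    // the expected body appears, mirroring the retries recorded in the log.
    func fetchNginx() error {
        client := &http.Client{Timeout: 10 * time.Second}
        var lastErr error
        for attempt := 0; attempt < 3; attempt++ {
            resp, err := client.Get("http://nginx-svc.default.svc.cluster.local.")
            if err == nil {
                body, readErr := io.ReadAll(resp.Body)
                resp.Body.Close()
                if readErr == nil && strings.Contains(string(body), "Welcome to nginx!") {
                    return nil // the same success criterion the test asserts
                }
                lastErr = fmt.Errorf("unexpected response body")
            } else {
                lastErr = err
            }
            time.Sleep(3 * time.Second) // the log shows ~3.4s between retries
        }
        return lastErr
    }

    func main() {
        if err := fetchNginx(); err != nil {
            fmt.Println("failed to reach nginx through the tunnel:", err)
        }
    }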

TestMultiControlPlane/serial/StartCluster (10.17s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-586000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-586000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.09551525s)

-- stdout --
	* [ha-586000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-586000" primary control-plane node in "ha-586000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-586000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 04:57:35.675856    8106 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:57:35.676035    8106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:57:35.676038    8106 out.go:358] Setting ErrFile to fd 2...
	I1028 04:57:35.676041    8106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:57:35.676176    8106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:57:35.677310    8106 out.go:352] Setting JSON to false
	I1028 04:57:35.694940    8106 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5226,"bootTime":1730111429,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:57:35.695017    8106 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:57:35.700591    8106 out.go:177] * [ha-586000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:57:35.708462    8106 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 04:57:35.708514    8106 notify.go:220] Checking for updates...
	I1028 04:57:35.715552    8106 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 04:57:35.718463    8106 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:57:35.721538    8106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:57:35.724553    8106 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 04:57:35.725911    8106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:57:35.728770    8106 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:57:35.732531    8106 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 04:57:35.737548    8106 start.go:297] selected driver: qemu2
	I1028 04:57:35.737554    8106 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:57:35.737560    8106 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:57:35.739914    8106 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:57:35.742574    8106 out.go:177] * Automatically selected the socket_vmnet network
	I1028 04:57:35.745627    8106 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 04:57:35.745645    8106 cni.go:84] Creating CNI manager for ""
	I1028 04:57:35.745664    8106 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 04:57:35.745668    8106 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 04:57:35.745694    8106 start.go:340] cluster config:
	{Name:ha-586000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:57:35.750452    8106 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:57:35.758599    8106 out.go:177] * Starting "ha-586000" primary control-plane node in "ha-586000" cluster
	I1028 04:57:35.762590    8106 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:57:35.762605    8106 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:57:35.762613    8106 cache.go:56] Caching tarball of preloaded images
	I1028 04:57:35.762692    8106 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 04:57:35.762698    8106 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 04:57:35.762890    8106 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/ha-586000/config.json ...
	I1028 04:57:35.762901    8106 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/ha-586000/config.json: {Name:mk96437e196ea7de18c3e71e96c81c21d516dddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:57:35.763257    8106 start.go:360] acquireMachinesLock for ha-586000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:57:35.763305    8106 start.go:364] duration metric: took 42.958µs to acquireMachinesLock for "ha-586000"
	I1028 04:57:35.763317    8106 start.go:93] Provisioning new machine with config: &{Name:ha-586000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:57:35.763344    8106 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:57:35.766582    8106 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:57:35.782530    8106 start.go:159] libmachine.API.Create for "ha-586000" (driver="qemu2")
	I1028 04:57:35.782556    8106 client.go:168] LocalClient.Create starting
	I1028 04:57:35.782620    8106 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 04:57:35.782657    8106 main.go:141] libmachine: Decoding PEM data...
	I1028 04:57:35.782669    8106 main.go:141] libmachine: Parsing certificate...
	I1028 04:57:35.782704    8106 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 04:57:35.782732    8106 main.go:141] libmachine: Decoding PEM data...
	I1028 04:57:35.782740    8106 main.go:141] libmachine: Parsing certificate...
	I1028 04:57:35.783209    8106 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:57:35.941272    8106 main.go:141] libmachine: Creating SSH key...
	I1028 04:57:36.034657    8106 main.go:141] libmachine: Creating Disk image...
	I1028 04:57:36.034662    8106 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:57:36.034859    8106 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2
	I1028 04:57:36.045213    8106 main.go:141] libmachine: STDOUT: 
	I1028 04:57:36.045235    8106 main.go:141] libmachine: STDERR: 
	I1028 04:57:36.045287    8106 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2 +20000M
	I1028 04:57:36.053786    8106 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:57:36.053806    8106 main.go:141] libmachine: STDERR: 
	I1028 04:57:36.053825    8106 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2
	I1028 04:57:36.053832    8106 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:57:36.053844    8106 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:57:36.053876    8106 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:04:c8:7c:46:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2
	I1028 04:57:36.055689    8106 main.go:141] libmachine: STDOUT: 
	I1028 04:57:36.055704    8106 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:57:36.055723    8106 client.go:171] duration metric: took 273.161541ms to LocalClient.Create
	I1028 04:57:38.057994    8106 start.go:128] duration metric: took 2.294619458s to createHost
	I1028 04:57:38.058098    8106 start.go:83] releasing machines lock for "ha-586000", held for 2.294773292s
	W1028 04:57:38.058160    8106 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:57:38.069553    8106 out.go:177] * Deleting "ha-586000" in qemu2 ...
	W1028 04:57:38.100492    8106 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:57:38.100518    8106 start.go:729] Will try again in 5 seconds ...
	I1028 04:57:43.102789    8106 start.go:360] acquireMachinesLock for ha-586000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 04:57:43.103339    8106 start.go:364] duration metric: took 457.875µs to acquireMachinesLock for "ha-586000"
	I1028 04:57:43.103459    8106 start.go:93] Provisioning new machine with config: &{Name:ha-586000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 04:57:43.103770    8106 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 04:57:43.119636    8106 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 04:57:43.168790    8106 start.go:159] libmachine.API.Create for "ha-586000" (driver="qemu2")
	I1028 04:57:43.168839    8106 client.go:168] LocalClient.Create starting
	I1028 04:57:43.168959    8106 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 04:57:43.169058    8106 main.go:141] libmachine: Decoding PEM data...
	I1028 04:57:43.169080    8106 main.go:141] libmachine: Parsing certificate...
	I1028 04:57:43.169139    8106 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 04:57:43.169195    8106 main.go:141] libmachine: Decoding PEM data...
	I1028 04:57:43.169209    8106 main.go:141] libmachine: Parsing certificate...
	I1028 04:57:43.169899    8106 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 04:57:43.339323    8106 main.go:141] libmachine: Creating SSH key...
	I1028 04:57:43.674365    8106 main.go:141] libmachine: Creating Disk image...
	I1028 04:57:43.674378    8106 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 04:57:43.674571    8106 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2
	I1028 04:57:43.684680    8106 main.go:141] libmachine: STDOUT: 
	I1028 04:57:43.684704    8106 main.go:141] libmachine: STDERR: 
	I1028 04:57:43.684771    8106 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2 +20000M
	I1028 04:57:43.693385    8106 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 04:57:43.693400    8106 main.go:141] libmachine: STDERR: 
	I1028 04:57:43.693412    8106 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2
	I1028 04:57:43.693416    8106 main.go:141] libmachine: Starting QEMU VM...
	I1028 04:57:43.693421    8106 qemu.go:418] Using hvf for hardware acceleration
	I1028 04:57:43.693460    8106 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:dd:a6:88:ef:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2
	I1028 04:57:43.695251    8106 main.go:141] libmachine: STDOUT: 
	I1028 04:57:43.695266    8106 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 04:57:43.695282    8106 client.go:171] duration metric: took 526.43675ms to LocalClient.Create
	I1028 04:57:45.697508    8106 start.go:128] duration metric: took 2.593700084s to createHost
	I1028 04:57:45.697564    8106 start.go:83] releasing machines lock for "ha-586000", held for 2.594189375s
	W1028 04:57:45.697915    8106 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-586000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-586000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 04:57:45.705520    8106 out.go:201] 
	W1028 04:57:45.713435    8106 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 04:57:45.713466    8106 out.go:270] * 
	* 
	W1028 04:57:45.715948    8106 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:57:45.723531    8106 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-586000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (74.282166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.17s)
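Unlike the tunnel tests, this failure has an explicit root cause in the stderr above: both attempts to launch the qemu2 VM die with Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning the socket_vmnet daemon is not listening, so the ha-586000 cluster is never created and every later TestMultiControlPlane step inherits a stopped host. A quick probe of that precondition follows; the socket path comes from the log, and the probe itself is a diagnostic sketch, not part of the suite.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The qemu2 driver hands the VM NIC to socket_vmnet over this unix
        // socket; if nothing accepts the connection, every "minikube start"
        // retry fails exactly as recorded above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }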

TestMultiControlPlane/serial/DeployApp (120.44s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (64.83875ms)

** stderr ** 
	error: cluster "ha-586000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- rollout status deployment/busybox: exit status 1 (62.474541ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.987708ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:57:46.005356    7452 retry.go:31] will retry after 742.834187ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.940375ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:57:46.859524    7452 retry.go:31] will retry after 1.998170424s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.113917ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:57:48.969104    7452 retry.go:31] will retry after 2.822890878s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.261917ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:57:51.903641    7452 retry.go:31] will retry after 3.069186033s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.05925ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:57:55.086243    7452 retry.go:31] will retry after 7.443068374s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.686333ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:58:02.640405    7452 retry.go:31] will retry after 4.064424049s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.041792ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:58:06.817206    7452 retry.go:31] will retry after 14.666992315s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.170625ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:58:21.596796    7452 retry.go:31] will retry after 17.994720268s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.147667ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:58:39.702285    7452 retry.go:31] will retry after 37.554568425s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.195ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 04:59:17.366789    7452 retry.go:31] will retry after 28.492816742s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.76975ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.796541ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.903ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.339458ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.47275ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (34.571334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (120.44s)
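All two minutes of retries above fail with the identical no server found for cluster "ha-586000" error: the cluster was never created (see StartCluster), so each kubectl invocation is doomed before it runs. Below is a sketch of a cheap pre-check that would fail fast instead of retrying; the kubectl subcommand is standard, but wiring it into the test flow this way is an assumption.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "kubectl config get-contexts <name>" exits non-zero when the named
        // context does not exist, which is the state this test inherited.
        out, err := exec.Command("kubectl", "config", "get-contexts", "ha-586000").CombinedOutput()
        if err != nil {
            fmt.Printf("context missing, skip retries: %v\n%s", err, out)
            return
        }
        fmt.Printf("context exists:\n%s", out)
    }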

TestMultiControlPlane/serial/PingHostFromPods (0.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-586000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.657375ms)

** stderr ** 
	error: no server found for cluster "ha-586000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (35.082ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-586000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-586000 -v=7 --alsologtostderr: exit status 83 (47.322625ms)

-- stdout --
	* The control-plane node ha-586000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-586000"

-- /stdout --
** stderr ** 
	I1028 04:59:46.384838    8199 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:59:46.385197    8199 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.385200    8199 out.go:358] Setting ErrFile to fd 2...
	I1028 04:59:46.385203    8199 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.385371    8199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:59:46.385593    8199 mustload.go:65] Loading cluster: ha-586000
	I1028 04:59:46.386095    8199 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:59:46.391218    8199 out.go:177] * The control-plane node ha-586000 host is not running: state=Stopped
	I1028 04:59:46.396180    8199 out.go:177]   To start a cluster, run: "minikube start -p ha-586000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-586000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (34.622166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-586000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-586000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.025709ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-586000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-586000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-586000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (35.412208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-586000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-586000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-586000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-586000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-586000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-586000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-586000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-586000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (34.746375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
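Both assertions above parse the same profile list --output json payload: the ha-586000 profile exists but reports Status "Starting" with a single configured node, because the VM never came up, so neither the 4-node count nor the "HAppy" status can be satisfied. Below is a sketch of the node-count extraction; the field names match the JSON captured in the log, and the struct is trimmed to just what the check reads.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList mirrors only the fields of "minikube profile list
    // --output json" that the node-count assertion inspects.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
            Config struct {
                Nodes []json.RawMessage `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        raw := []byte(`{"valid":[{"Name":"ha-586000","Status":"Starting","Config":{"Nodes":[{}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            fmt.Println("decode error:", err)
            return
        }
        for _, p := range pl.Valid {
            // The test wants 4 nodes (3 control planes + 1 worker); here: 1.
            fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
        }
    }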

TestMultiControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status --output json -v=7 --alsologtostderr: exit status 7 (34.830875ms)

-- stdout --
	{"Name":"ha-586000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1028 04:59:46.619559    8211 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:59:46.619740    8211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.619744    8211 out.go:358] Setting ErrFile to fd 2...
	I1028 04:59:46.619746    8211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.619892    8211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:59:46.620014    8211 out.go:352] Setting JSON to true
	I1028 04:59:46.620024    8211 mustload.go:65] Loading cluster: ha-586000
	I1028 04:59:46.620070    8211 notify.go:220] Checking for updates...
	I1028 04:59:46.620239    8211 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:59:46.620248    8211 status.go:174] checking status of ha-586000 ...
	I1028 04:59:46.620490    8211 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 04:59:46.620494    8211 status.go:384] host is not running, skipping remaining checks
	I1028 04:59:46.620496    8211 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-586000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (34.745416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
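
The "cannot unmarshal" failure at ha_test.go:335 above is a shape mismatch: the test decodes the status output into a slice ([]cluster.Status), but with only one node in the profile the command printed a single JSON object, not an array. A minimal sketch of that mismatch, using a simplified stand-in for minikube's cluster.Status type (field set here is an assumption, trimmed to what the log shows):

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a simplified stand-in for minikube's cluster.Status type;
// only the fields visible in the log above are included.
type Status struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// With a single-node profile, `minikube status --output json` printed
	// one JSON object (see the -- stdout -- block above), not an array.
	raw := []byte(`{"Name":"ha-586000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped"}`)

	// Decoding into a slice fails, matching the test's error:
	// json: cannot unmarshal object into Go value of type []main.Status
	var statuses []Status
	fmt.Println(json.Unmarshal(raw, &statuses))

	// Decoding into a single value succeeds, which shows the payload
	// itself is fine; only its shape (object vs. array) is unexpected.
	var single Status
	fmt.Println(json.Unmarshal(raw, &single), single.Host) // <nil> Stopped
}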

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 node stop m02 -v=7 --alsologtostderr: exit status 85 (51.45375ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1028 04:59:46.689884    8215 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:59:46.690336    8215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.690340    8215 out.go:358] Setting ErrFile to fd 2...
	I1028 04:59:46.690343    8215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.690510    8215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:59:46.690761    8215 mustload.go:65] Loading cluster: ha-586000
	I1028 04:59:46.690982    8215 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:59:46.695536    8215 out.go:201] 
	W1028 04:59:46.698469    8215 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1028 04:59:46.698474    8215 out.go:270] * 
	* 
	W1028 04:59:46.700417    8215 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:59:46.703476    8215 out.go:201] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-586000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (35.11325ms)

-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:59:46.741744    8217 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:59:46.741909    8217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.741912    8217 out.go:358] Setting ErrFile to fd 2...
	I1028 04:59:46.741915    8217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.742041    8217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:59:46.742154    8217 out.go:352] Setting JSON to false
	I1028 04:59:46.742166    8217 mustload.go:65] Loading cluster: ha-586000
	I1028 04:59:46.742222    8217 notify.go:220] Checking for updates...
	I1028 04:59:46.742356    8217 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:59:46.742366    8217 status.go:174] checking status of ha-586000 ...
	I1028 04:59:46.742621    8217 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 04:59:46.742624    8217 status.go:384] host is not running, skipping remaining checks
	I1028 04:59:46.742627    8217 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr": ha-586000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr": ha-586000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr": ha-586000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr": ha-586000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (34.587208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
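
The exit status 85 (GUEST_NODE_RETRIEVE) above occurs before any VM work is attempted: the saved profile's Nodes list contains only the primary control-plane entry (see the profile JSON earlier in this report), so looking up "m02" has nothing to find. A rough sketch of that lookup, with hypothetical Node and findNode names standing in for minikube's internals:

package main

import "fmt"

// Node is a hypothetical, trimmed-down version of a profile node entry.
type Node struct {
	Name         string
	ControlPlane bool
}

// findNode is an illustrative lookup, not minikube's actual node retrieval;
// the real code reports this condition as GUEST_NODE_RETRIEVE.
func findNode(nodes []Node, name string) (Node, error) {
	for _, n := range nodes {
		if n.Name == name {
			return n, nil
		}
	}
	return Node{}, fmt.Errorf("retrieving node: Could not find node %s", name)
}

func main() {
	// Matches the profile JSON above: a single entry whose Name is empty
	// (the primary control-plane node); "m02" was never added because the
	// multi-node start failed earlier in the run.
	nodes := []Node{{Name: "", ControlPlane: true}}
	if _, err := findNode(nodes, "m02"); err != nil {
		fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err)
	}
}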

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-586000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-586000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-586000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-586000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (34.589084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

TestMultiControlPlane/serial/RestartSecondaryNode (50.31s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 node start m02 -v=7 --alsologtostderr: exit status 85 (52.848667ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1028 04:59:46.898663    8226 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:59:46.899078    8226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.899082    8226 out.go:358] Setting ErrFile to fd 2...
	I1028 04:59:46.899085    8226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.899263    8226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:59:46.899556    8226 mustload.go:65] Loading cluster: ha-586000
	I1028 04:59:46.899742    8226 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:59:46.904518    8226 out.go:201] 
	W1028 04:59:46.907505    8226 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1028 04:59:46.907511    8226 out.go:270] * 
	* 
	W1028 04:59:46.909257    8226 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:59:46.913488    8226 out.go:201] 

** /stderr **
ha_test.go:424: I1028 04:59:46.898663    8226 out.go:345] Setting OutFile to fd 1 ...
I1028 04:59:46.899078    8226 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:59:46.899082    8226 out.go:358] Setting ErrFile to fd 2...
I1028 04:59:46.899085    8226 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 04:59:46.899263    8226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
I1028 04:59:46.899556    8226 mustload.go:65] Loading cluster: ha-586000
I1028 04:59:46.899742    8226 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 04:59:46.904518    8226 out.go:201] 
W1028 04:59:46.907505    8226 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1028 04:59:46.907511    8226 out.go:270] * 
* 
W1028 04:59:46.909257    8226 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1028 04:59:46.913488    8226 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-586000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (34.60075ms)

-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:59:46.951080    8228 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:59:46.951262    8228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.951267    8228 out.go:358] Setting ErrFile to fd 2...
	I1028 04:59:46.951270    8228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:46.951390    8228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:59:46.951522    8228 out.go:352] Setting JSON to false
	I1028 04:59:46.951531    8228 mustload.go:65] Loading cluster: ha-586000
	I1028 04:59:46.951596    8228 notify.go:220] Checking for updates...
	I1028 04:59:46.951756    8228 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:59:46.951766    8228 status.go:174] checking status of ha-586000 ...
	I1028 04:59:46.952016    8228 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 04:59:46.952019    8228 status.go:384] host is not running, skipping remaining checks
	I1028 04:59:46.952021    8228 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:59:46.952944    7452 retry.go:31] will retry after 539.098078ms: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (80.046958ms)

-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:59:47.572230    8230 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:59:47.572468    8230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:47.572472    8230 out.go:358] Setting ErrFile to fd 2...
	I1028 04:59:47.572476    8230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:47.572622    8230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:59:47.572779    8230 out.go:352] Setting JSON to false
	I1028 04:59:47.572792    8230 mustload.go:65] Loading cluster: ha-586000
	I1028 04:59:47.572826    8230 notify.go:220] Checking for updates...
	I1028 04:59:47.573071    8230 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:59:47.573081    8230 status.go:174] checking status of ha-586000 ...
	I1028 04:59:47.573370    8230 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 04:59:47.573375    8230 status.go:384] host is not running, skipping remaining checks
	I1028 04:59:47.573377    8230 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:59:47.574417    7452 retry.go:31] will retry after 1.853020521s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (80.111208ms)

-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:59:49.507855    8232 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:59:49.508071    8232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:49.508076    8232 out.go:358] Setting ErrFile to fd 2...
	I1028 04:59:49.508079    8232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:49.508237    8232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:59:49.508389    8232 out.go:352] Setting JSON to false
	I1028 04:59:49.508401    8232 mustload.go:65] Loading cluster: ha-586000
	I1028 04:59:49.508432    8232 notify.go:220] Checking for updates...
	I1028 04:59:49.508645    8232 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:59:49.508655    8232 status.go:174] checking status of ha-586000 ...
	I1028 04:59:49.508950    8232 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 04:59:49.508955    8232 status.go:384] host is not running, skipping remaining checks
	I1028 04:59:49.508957    8232 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:59:49.509992    7452 retry.go:31] will retry after 1.327518782s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (80.953792ms)

-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:59:50.918540    8234 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:59:50.918768    8234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:50.918772    8234 out.go:358] Setting ErrFile to fd 2...
	I1028 04:59:50.918776    8234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:50.918959    8234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:59:50.919142    8234 out.go:352] Setting JSON to false
	I1028 04:59:50.919155    8234 mustload.go:65] Loading cluster: ha-586000
	I1028 04:59:50.919198    8234 notify.go:220] Checking for updates...
	I1028 04:59:50.919409    8234 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:59:50.919419    8234 status.go:174] checking status of ha-586000 ...
	I1028 04:59:50.919737    8234 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 04:59:50.919742    8234 status.go:384] host is not running, skipping remaining checks
	I1028 04:59:50.919744    8234 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:59:50.920827    7452 retry.go:31] will retry after 3.314871292s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (75.823333ms)

-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 04:59:54.311693    8239 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:59:54.311919    8239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:54.311923    8239 out.go:358] Setting ErrFile to fd 2...
	I1028 04:59:54.311926    8239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:59:54.312091    8239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:59:54.312248    8239 out.go:352] Setting JSON to false
	I1028 04:59:54.312262    8239 mustload.go:65] Loading cluster: ha-586000
	I1028 04:59:54.312302    8239 notify.go:220] Checking for updates...
	I1028 04:59:54.312526    8239 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:59:54.312535    8239 status.go:174] checking status of ha-586000 ...
	I1028 04:59:54.312870    8239 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 04:59:54.312875    8239 status.go:384] host is not running, skipping remaining checks
	I1028 04:59:54.312878    8239 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 04:59:54.313920    7452 retry.go:31] will retry after 6.710918125s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (62.483459ms)

-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:00:01.086931    8243 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:00:01.087191    8243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:01.087197    8243 out.go:358] Setting ErrFile to fd 2...
	I1028 05:00:01.087201    8243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:01.087408    8243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:00:01.087607    8243 out.go:352] Setting JSON to false
	I1028 05:00:01.087627    8243 mustload.go:65] Loading cluster: ha-586000
	I1028 05:00:01.087677    8243 notify.go:220] Checking for updates...
	I1028 05:00:01.087980    8243 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:00:01.087993    8243 status.go:174] checking status of ha-586000 ...
	I1028 05:00:01.088412    8243 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 05:00:01.088418    8243 status.go:384] host is not running, skipping remaining checks
	I1028 05:00:01.088421    8243 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 05:00:01.089700    7452 retry.go:31] will retry after 10.28141974s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (78.585041ms)

-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:00:11.449987    8517 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:00:11.450209    8517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:11.450213    8517 out.go:358] Setting ErrFile to fd 2...
	I1028 05:00:11.450215    8517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:11.450381    8517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:00:11.450540    8517 out.go:352] Setting JSON to false
	I1028 05:00:11.450552    8517 mustload.go:65] Loading cluster: ha-586000
	I1028 05:00:11.450595    8517 notify.go:220] Checking for updates...
	I1028 05:00:11.450792    8517 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:00:11.450802    8517 status.go:174] checking status of ha-586000 ...
	I1028 05:00:11.451089    8517 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 05:00:11.451093    8517 status.go:384] host is not running, skipping remaining checks
	I1028 05:00:11.451096    8517 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 05:00:11.452086    7452 retry.go:31] will retry after 5.842862735s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (79.293166ms)

-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:00:17.374509    8519 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:00:17.374725    8519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:17.374729    8519 out.go:358] Setting ErrFile to fd 2...
	I1028 05:00:17.374732    8519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:17.374879    8519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:00:17.375044    8519 out.go:352] Setting JSON to false
	I1028 05:00:17.375056    8519 mustload.go:65] Loading cluster: ha-586000
	I1028 05:00:17.375095    8519 notify.go:220] Checking for updates...
	I1028 05:00:17.375330    8519 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:00:17.375340    8519 status.go:174] checking status of ha-586000 ...
	I1028 05:00:17.375629    8519 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 05:00:17.375634    8519 status.go:384] host is not running, skipping remaining checks
	I1028 05:00:17.375636    8519 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 05:00:17.376681    7452 retry.go:31] will retry after 19.675630171s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (81.74575ms)

-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:00:37.134597    8523 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:00:37.134805    8523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:37.134809    8523 out.go:358] Setting ErrFile to fd 2...
	I1028 05:00:37.134812    8523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:37.134988    8523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:00:37.135124    8523 out.go:352] Setting JSON to false
	I1028 05:00:37.135136    8523 mustload.go:65] Loading cluster: ha-586000
	I1028 05:00:37.135166    8523 notify.go:220] Checking for updates...
	I1028 05:00:37.135374    8523 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:00:37.135383    8523 status.go:174] checking status of ha-586000 ...
	I1028 05:00:37.135686    8523 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 05:00:37.135690    8523 status.go:384] host is not running, skipping remaining checks
	I1028 05:00:37.135693    8523 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (35.939917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.31s)
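
The retry.go:31 lines in this test show the harness polling status with growing, jittered delays (539ms, 1.85s, 1.33s, 3.31s, 6.71s, ...) until it gives up at ha_test.go:434. A minimal sketch of that retry-with-backoff shape, assuming an illustrative delay policy (minikube's exact algorithm may differ):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry polls fn with a randomized, roughly exponential delay between
// attempts, mirroring the "will retry after ..." lines in the log above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << uint(i)                             // exponential growth
		d = d/2 + time.Duration(rand.Int63n(int64(d)))/2 // add jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retry(4, 500*time.Millisecond, func() error {
		// Stands in for `minikube status` returning exit status 7
		// while the host stays Stopped, as in the runs above.
		return errors.New("exit status 7")
	})
	fmt.Println("gave up:", err)
}

Because the VM never starts, every poll returns the same Stopped status, so the backoff only stretches the failure out to the 50-second test duration recorded above.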

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-586000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-586000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-586000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-586000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-586000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-586000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-586000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-586000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (34.727208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.24s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-586000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-586000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-586000 -v=7 --alsologtostderr: (1.860776208s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-586000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-586000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.236489542s)

-- stdout --
	* [ha-586000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-586000" primary control-plane node in "ha-586000" cluster
	* Restarting existing qemu2 VM for "ha-586000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-586000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
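
Both "Restarting existing qemu2 VM" attempts in the stdout block above abort because nothing is listening on /var/run/socket_vmnet, the SocketVMnetPath recorded in the profile config. A client-side probe of such a unix socket reproduces the same failure mode (a sketch, not minikube's actual connection code):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the socket_vmnet control socket the way a client would; the
	// path matches SocketVMnetPath in the profile config shown earlier.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// With no socket_vmnet daemon running, this prints a
		// "connection refused" (or "no such file or directory") error,
		// the condition behind the ERROR lines above.
		fmt.Println("ERROR:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}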
** stderr ** 
	I1028 05:00:39.225937    8545 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:00:39.226126    8545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:39.226130    8545 out.go:358] Setting ErrFile to fd 2...
	I1028 05:00:39.226133    8545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:39.226320    8545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:00:39.227543    8545 out.go:352] Setting JSON to false
	I1028 05:00:39.247692    8545 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5410,"bootTime":1730111429,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:00:39.247762    8545 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:00:39.252542    8545 out.go:177] * [ha-586000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:00:39.261378    8545 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:00:39.261402    8545 notify.go:220] Checking for updates...
	I1028 05:00:39.267507    8545 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:00:39.268893    8545 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:00:39.271484    8545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:00:39.274503    8545 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:00:39.277533    8545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:00:39.280874    8545 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:00:39.280929    8545 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:00:39.285499    8545 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:00:39.292568    8545 start.go:297] selected driver: qemu2
	I1028 05:00:39.292586    8545 start.go:901] validating driver "qemu2" against &{Name:ha-586000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.2 ClusterName:ha-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:00:39.292643    8545 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:00:39.295157    8545 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:00:39.295183    8545 cni.go:84] Creating CNI manager for ""
	I1028 05:00:39.295208    8545 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 05:00:39.295259    8545 start.go:340] cluster config:
	{Name:ha-586000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:00:39.299676    8545 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:00:39.307476    8545 out.go:177] * Starting "ha-586000" primary control-plane node in "ha-586000" cluster
	I1028 05:00:39.311486    8545 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:00:39.311502    8545 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:00:39.311512    8545 cache.go:56] Caching tarball of preloaded images
	I1028 05:00:39.311603    8545 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:00:39.311608    8545 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:00:39.311664    8545 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/ha-586000/config.json ...
	I1028 05:00:39.312069    8545 start.go:360] acquireMachinesLock for ha-586000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:00:39.312116    8545 start.go:364] duration metric: took 41.584µs to acquireMachinesLock for "ha-586000"
	I1028 05:00:39.312125    8545 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:00:39.312129    8545 fix.go:54] fixHost starting: 
	I1028 05:00:39.312241    8545 fix.go:112] recreateIfNeeded on ha-586000: state=Stopped err=<nil>
	W1028 05:00:39.312257    8545 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:00:39.320499    8545 out.go:177] * Restarting existing qemu2 VM for "ha-586000" ...
	I1028 05:00:39.324507    8545 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:00:39.324546    8545 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:dd:a6:88:ef:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2
	I1028 05:00:39.326719    8545 main.go:141] libmachine: STDOUT: 
	I1028 05:00:39.326739    8545 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:00:39.326768    8545 fix.go:56] duration metric: took 14.63725ms for fixHost
	I1028 05:00:39.326774    8545 start.go:83] releasing machines lock for "ha-586000", held for 14.653166ms
	W1028 05:00:39.326779    8545 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:00:39.326822    8545 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:00:39.326826    8545 start.go:729] Will try again in 5 seconds ...
	I1028 05:00:44.329072    8545 start.go:360] acquireMachinesLock for ha-586000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:00:44.329536    8545 start.go:364] duration metric: took 397.458µs to acquireMachinesLock for "ha-586000"
	I1028 05:00:44.330021    8545 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:00:44.330046    8545 fix.go:54] fixHost starting: 
	I1028 05:00:44.330804    8545 fix.go:112] recreateIfNeeded on ha-586000: state=Stopped err=<nil>
	W1028 05:00:44.330832    8545 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:00:44.335331    8545 out.go:177] * Restarting existing qemu2 VM for "ha-586000" ...
	I1028 05:00:44.346238    8545 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:00:44.346505    8545 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:dd:a6:88:ef:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2
	I1028 05:00:44.356896    8545 main.go:141] libmachine: STDOUT: 
	I1028 05:00:44.356962    8545 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:00:44.357045    8545 fix.go:56] duration metric: took 27.002958ms for fixHost
	I1028 05:00:44.357074    8545 start.go:83] releasing machines lock for "ha-586000", held for 27.512875ms
	W1028 05:00:44.357277    8545 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-586000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-586000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:00:44.365236    8545 out.go:201] 
	W1028 05:00:44.369342    8545 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:00:44.369398    8545 out.go:270] * 
	* 
	W1028 05:00:44.372052    8545 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:00:44.379070    8545 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-586000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-586000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (36.733375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.24s)
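
Note: every start attempt in this section dies at the same point: socket_vmnet_client cannot reach the /var/run/socket_vmnet unix socket, so qemu-system-aarch64 never receives its vmnet file descriptor (the fd=3 handed over via -netdev socket,id=net0,fd=3 in the command line above). The following is a minimal, hypothetical Go probe, not part of the test suite, that reproduces the failing connect; it assumes only the socket path shown in the log.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from SocketVMnetPath in the log above
	if _, err := os.Stat(sock); err != nil {
		fmt.Fprintf(os.Stderr, "socket missing: %v (socket_vmnet not installed or not started?)\n", err)
		os.Exit(1)
	}
	// A refused dial here corresponds to the `Connection refused` seen above:
	// the socket file exists, but no socket_vmnet daemon is accepting on it.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial failed: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}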

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 node delete m03 -v=7 --alsologtostderr: exit status 83 (44.766375ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-586000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-586000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:00:44.537323    8557 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:00:44.537774    8557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:44.537777    8557 out.go:358] Setting ErrFile to fd 2...
	I1028 05:00:44.537780    8557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:44.537940    8557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:00:44.538164    8557 mustload.go:65] Loading cluster: ha-586000
	I1028 05:00:44.538390    8557 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:00:44.543164    8557 out.go:177] * The control-plane node ha-586000 host is not running: state=Stopped
	I1028 05:00:44.546169    8557 out.go:177]   To start a cluster, run: "minikube start -p ha-586000"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-586000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (34.237459ms)

                                                
                                                
-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:00:44.582553    8559 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:00:44.582735    8559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:44.582738    8559 out.go:358] Setting ErrFile to fd 2...
	I1028 05:00:44.582741    8559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:44.582863    8559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:00:44.582992    8559 out.go:352] Setting JSON to false
	I1028 05:00:44.583003    8559 mustload.go:65] Loading cluster: ha-586000
	I1028 05:00:44.583069    8559 notify.go:220] Checking for updates...
	I1028 05:00:44.583201    8559 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:00:44.583210    8559 status.go:174] checking status of ha-586000 ...
	I1028 05:00:44.583471    8559 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 05:00:44.583475    8559 status.go:384] host is not running, skipping remaining checks
	I1028 05:00:44.583477    8559 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (34.473459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
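
Note: the post-mortem helper above evaluates the Go template {{.Host}} against the status structure that status.go:176 logs (Name/Host/Kubelet/APIServer/Kubeconfig, all "Stopped" here). A minimal sketch of that evaluation, using a hypothetical struct that mirrors only the logged fields:

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical subset of the struct logged at status.go:176 above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := Status{Name: "ha-586000", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	// Same template string the helper passes via --format={{.Host}}:
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
}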

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-586000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-586000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-586000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-586000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (36.253083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)
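
Note: the assertion above pulls the profile status out of the JSON that `profile list --output json` prints. A minimal sketch of that extraction, assuming only the field names visible in the quoted JSON (valid[].Name, valid[].Status, valid[].Config.Nodes); the struct here is illustrative, not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList declares just the fields the assertions above inspect.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// For the run above this prints: ha-586000: status=Starting nodes=1
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}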

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-586000 stop -v=7 --alsologtostderr: (3.534576959s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr: exit status 7 (74.390625ms)

                                                
                                                
-- stdout --
	ha-586000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:00:48.314694    8586 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:00:48.314920    8586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:48.314924    8586 out.go:358] Setting ErrFile to fd 2...
	I1028 05:00:48.314927    8586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:48.315113    8586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:00:48.315286    8586 out.go:352] Setting JSON to false
	I1028 05:00:48.315299    8586 mustload.go:65] Loading cluster: ha-586000
	I1028 05:00:48.315331    8586 notify.go:220] Checking for updates...
	I1028 05:00:48.315563    8586 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:00:48.315573    8586 status.go:174] checking status of ha-586000 ...
	I1028 05:00:48.315880    8586 status.go:371] ha-586000 host status = "Stopped" (err=<nil>)
	I1028 05:00:48.315885    8586 status.go:384] host is not running, skipping remaining checks
	I1028 05:00:48.315888    8586 status.go:176] ha-586000 status: &{Name:ha-586000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr": ha-586000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr": ha-586000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-586000 status -v=7 --alsologtostderr": ha-586000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (36.107708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.65s)
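
Note: the ha_test.go:545/551/554 assertions above count per-node lines in the status text ("type: Control Plane", "kubelet: Stopped", "apiserver: Stopped") and fail because only one node is reported instead of the expected multi-node set. A minimal sketch of that counting, assuming only the output format shown in the stdout block above; status deliberately exits 7 for a stopped host (see helpers_test.go:239), so the exit code is ignored here:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Exit status 7 just means "stopped", so the command error is intentionally dropped.
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "ha-586000", "status").CombinedOutput()
	s := string(out)
	fmt.Println("control planes:", strings.Count(s, "type: Control Plane"))
	fmt.Println("kubelets stopped:", strings.Count(s, "kubelet: Stopped"))
	fmt.Println("apiservers stopped:", strings.Count(s, "apiserver: Stopped"))
}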

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-586000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-586000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.196466291s)

                                                
                                                
-- stdout --
	* [ha-586000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-586000" primary control-plane node in "ha-586000" cluster
	* Restarting existing qemu2 VM for "ha-586000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-586000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:00:48.385406    8590 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:00:48.385576    8590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:48.385579    8590 out.go:358] Setting ErrFile to fd 2...
	I1028 05:00:48.385582    8590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:48.385707    8590 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:00:48.386770    8590 out.go:352] Setting JSON to false
	I1028 05:00:48.404389    8590 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5419,"bootTime":1730111429,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:00:48.404453    8590 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:00:48.408166    8590 out.go:177] * [ha-586000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:00:48.416093    8590 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:00:48.416151    8590 notify.go:220] Checking for updates...
	I1028 05:00:48.423106    8590 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:00:48.426003    8590 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:00:48.429059    8590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:00:48.432070    8590 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:00:48.435131    8590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:00:48.438366    8590 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:00:48.438640    8590 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:00:48.443080    8590 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:00:48.450060    8590 start.go:297] selected driver: qemu2
	I1028 05:00:48.450067    8590 start.go:901] validating driver "qemu2" against &{Name:ha-586000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:00:48.450131    8590 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:00:48.452751    8590 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:00:48.452776    8590 cni.go:84] Creating CNI manager for ""
	I1028 05:00:48.452795    8590 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 05:00:48.452832    8590 start.go:340] cluster config:
	{Name:ha-586000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:00:48.457361    8590 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:00:48.465070    8590 out.go:177] * Starting "ha-586000" primary control-plane node in "ha-586000" cluster
	I1028 05:00:48.469053    8590 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:00:48.469067    8590 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:00:48.469074    8590 cache.go:56] Caching tarball of preloaded images
	I1028 05:00:48.469150    8590 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:00:48.469156    8590 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:00:48.469207    8590 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/ha-586000/config.json ...
	I1028 05:00:48.469674    8590 start.go:360] acquireMachinesLock for ha-586000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:00:48.469705    8590 start.go:364] duration metric: took 25µs to acquireMachinesLock for "ha-586000"
	I1028 05:00:48.469714    8590 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:00:48.469719    8590 fix.go:54] fixHost starting: 
	I1028 05:00:48.469850    8590 fix.go:112] recreateIfNeeded on ha-586000: state=Stopped err=<nil>
	W1028 05:00:48.469858    8590 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:00:48.478060    8590 out.go:177] * Restarting existing qemu2 VM for "ha-586000" ...
	I1028 05:00:48.481917    8590 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:00:48.481954    8590 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:dd:a6:88:ef:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2
	I1028 05:00:48.484259    8590 main.go:141] libmachine: STDOUT: 
	I1028 05:00:48.484280    8590 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:00:48.484312    8590 fix.go:56] duration metric: took 14.592208ms for fixHost
	I1028 05:00:48.484318    8590 start.go:83] releasing machines lock for "ha-586000", held for 14.608209ms
	W1028 05:00:48.484324    8590 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:00:48.484358    8590 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:00:48.484363    8590 start.go:729] Will try again in 5 seconds ...
	I1028 05:00:53.486646    8590 start.go:360] acquireMachinesLock for ha-586000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:00:53.487173    8590 start.go:364] duration metric: took 409.625µs to acquireMachinesLock for "ha-586000"
	I1028 05:00:53.487318    8590 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:00:53.487341    8590 fix.go:54] fixHost starting: 
	I1028 05:00:53.488099    8590 fix.go:112] recreateIfNeeded on ha-586000: state=Stopped err=<nil>
	W1028 05:00:53.488126    8590 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:00:53.496582    8590 out.go:177] * Restarting existing qemu2 VM for "ha-586000" ...
	I1028 05:00:53.501688    8590 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:00:53.501925    8590 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:dd:a6:88:ef:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/ha-586000/disk.qcow2
	I1028 05:00:53.512847    8590 main.go:141] libmachine: STDOUT: 
	I1028 05:00:53.512915    8590 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:00:53.513002    8590 fix.go:56] duration metric: took 25.666667ms for fixHost
	I1028 05:00:53.513021    8590 start.go:83] releasing machines lock for "ha-586000", held for 25.823542ms
	W1028 05:00:53.513205    8590 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-586000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-586000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:00:53.521615    8590 out.go:201] 
	W1028 05:00:53.524598    8590 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:00:53.524626    8590 out.go:270] * 
	* 
	W1028 05:00:53.526489    8590 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:00:53.536689    8590 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-586000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (75.54625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)
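
Note: the RestartCluster trace shows the same two-attempt shape as every start in this report: fixHost fails, start.go:729 waits 5 seconds, retries once, then exits with GUEST_PROVISION. A minimal, hypothetical sketch of that retry shape; startHost here is a stand-in, not minikube's actual function:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost is a hypothetical stand-in for the driver start that fails above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2
	for i := 1; i <= attempts; i++ {
		err := startHost()
		if err == nil {
			return
		}
		if i < attempts {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds" in the log
			continue
		}
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}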

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-586000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-586000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-586000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-586000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (35.318125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-586000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-586000 --control-plane -v=7 --alsologtostderr: exit status 83 (45.320875ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-586000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-586000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:00:53.749944    8606 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:00:53.750152    8606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:53.750156    8606 out.go:358] Setting ErrFile to fd 2...
	I1028 05:00:53.750158    8606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:00:53.750282    8606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:00:53.750525    8606 mustload.go:65] Loading cluster: ha-586000
	I1028 05:00:53.750730    8606 config.go:182] Loaded profile config "ha-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:00:53.753767    8606 out.go:177] * The control-plane node ha-586000 host is not running: state=Stopped
	I1028 05:00:53.757758    8606 out.go:177]   To start a cluster, run: "minikube start -p ha-586000"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-586000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (34.735958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-586000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-586000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-586000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-586000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-586000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-586000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-586000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-586000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-586000 -n ha-586000: exit status 7 (33.965ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

TestImageBuild/serial/Setup (10.22s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-550000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-550000 --driver=qemu2 : exit status 80 (10.144589708s)

-- stdout --
	* [image-550000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-550000" primary control-plane node in "image-550000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-550000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-550000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-550000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-550000 -n image-550000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-550000 -n image-550000: exit status 7 (74.487625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-550000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.22s)

TestJSONOutput/start/Command (9.84s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-341000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-341000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.836844166s)

-- stdout --
	{"specversion":"1.0","id":"8afc7bb2-0d58-4460-8a4c-9bfef077a715","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-341000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"72aa79f3-4f5b-4f99-a6b2-2bd4aeac2cd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19875"}}
	{"specversion":"1.0","id":"11985b1a-5c71-4c17-be8b-d00243cb1e78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig"}}
	{"specversion":"1.0","id":"176551f1-6385-456d-9dce-0b51ecbc512e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"df6fb681-e2b5-4ef7-9b10-513412c80472","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7873e1a2-ef56-4f6f-8c30-3b295b2de3d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube"}}
	{"specversion":"1.0","id":"7066beac-3530-45e9-8d1b-350803cc8eed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7a00e488-5f5d-43ce-8a3a-c4cad3e6f847","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"48fa069a-7014-4a09-9c39-fc4d911e62ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"02c2f72f-a1c7-4cef-b1a7-a65b4084a1b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-341000\" primary control-plane node in \"json-output-341000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0fafd85-a53a-41c3-b812-b1dda7a7d592","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"fef0902b-ba48-487a-b772-f27aad47ccc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-341000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a170bf2-d9c5-41d9-941b-d91529442ab2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"b52a8e27-ee71-4690-b475-d3713ca7187d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"88bf8f63-b8a0-4bc7-bd72-1b08ffe4ad45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-341000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a87013ec-0e04-43f2-ab9e-633120b8b5b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"3a8ad760-ea95-40ff-abb8-735eab459a57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-341000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.84s)

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-341000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-341000 --output=json --user=testUser: exit status 83 (84.084084ms)

-- stdout --
	{"specversion":"1.0","id":"fadb66b8-d8c7-471d-9359-e4d934fb4dd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-341000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"41f00216-1cf7-4ae8-b551-fa92bf9f428d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-341000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-341000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-341000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-341000 --output=json --user=testUser: exit status 83 (48.507208ms)

-- stdout --
	* The control-plane node json-output-341000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-341000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-341000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-341000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-973000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-973000 --driver=qemu2 : exit status 80 (9.932212583s)

-- stdout --
	* [first-973000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-973000" primary control-plane node in "first-973000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-973000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-973000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-973000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-28 05:01:27.961887 -0700 PDT m=+447.300255376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-975000 -n second-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-975000 -n second-975000: exit status 85 (85.066792ms)

-- stdout --
	* Profile "second-975000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-975000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-975000" host is not running, skipping log retrieval (state="* Profile \"second-975000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-975000\"")
helpers_test.go:175: Cleaning up "second-975000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-975000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-28 05:01:28.166154 -0700 PDT m=+447.504521876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-973000 -n first-973000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-973000 -n first-973000: exit status 7 (35.044875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-973000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-973000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-973000
--- FAIL: TestMinikubeProfile (10.25s)

TestMountStart/serial/StartWithMountFirst (10.26s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-093000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-093000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.189285125s)

-- stdout --
	* [mount-start-1-093000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-093000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-093000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-093000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-093000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-093000 -n mount-start-1-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-093000 -n mount-start-1-093000: exit status 7 (73.206625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.26s)

TestMultiNode/serial/FreshStart2Nodes (9.87s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-268000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-268000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.790220917s)

-- stdout --
	* [multinode-268000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-268000" primary control-plane node in "multinode-268000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-268000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:01:38.773329    8751 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:01:38.773496    8751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:01:38.773499    8751 out.go:358] Setting ErrFile to fd 2...
	I1028 05:01:38.773501    8751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:01:38.773647    8751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:01:38.774805    8751 out.go:352] Setting JSON to false
	I1028 05:01:38.792535    8751 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5469,"bootTime":1730111429,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:01:38.792632    8751 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:01:38.798417    8751 out.go:177] * [multinode-268000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:01:38.806463    8751 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:01:38.806516    8751 notify.go:220] Checking for updates...
	I1028 05:01:38.814362    8751 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:01:38.817309    8751 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:01:38.820364    8751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:01:38.824391    8751 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:01:38.827359    8751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:01:38.830507    8751 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:01:38.834400    8751 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:01:38.841303    8751 start.go:297] selected driver: qemu2
	I1028 05:01:38.841309    8751 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:01:38.841314    8751 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:01:38.843854    8751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:01:38.846442    8751 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:01:38.847979    8751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:01:38.847998    8751 cni.go:84] Creating CNI manager for ""
	I1028 05:01:38.848016    8751 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 05:01:38.848024    8751 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 05:01:38.848054    8751 start.go:340] cluster config:
	{Name:multinode-268000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:01:38.852771    8751 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:01:38.861468    8751 out.go:177] * Starting "multinode-268000" primary control-plane node in "multinode-268000" cluster
	I1028 05:01:38.865344    8751 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:01:38.865362    8751 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:01:38.865375    8751 cache.go:56] Caching tarball of preloaded images
	I1028 05:01:38.865463    8751 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:01:38.865468    8751 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:01:38.865702    8751 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/multinode-268000/config.json ...
	I1028 05:01:38.865714    8751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/multinode-268000/config.json: {Name:mke666c77123e778d5d27a16ba7f8d7443e3e682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:01:38.866103    8751 start.go:360] acquireMachinesLock for multinode-268000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:01:38.866154    8751 start.go:364] duration metric: took 45µs to acquireMachinesLock for "multinode-268000"
	I1028 05:01:38.866166    8751 start.go:93] Provisioning new machine with config: &{Name:multinode-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:multinode-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:01:38.866202    8751 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:01:38.874399    8751 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:01:38.892431    8751 start.go:159] libmachine.API.Create for "multinode-268000" (driver="qemu2")
	I1028 05:01:38.892468    8751 client.go:168] LocalClient.Create starting
	I1028 05:01:38.892540    8751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:01:38.892577    8751 main.go:141] libmachine: Decoding PEM data...
	I1028 05:01:38.892590    8751 main.go:141] libmachine: Parsing certificate...
	I1028 05:01:38.892629    8751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:01:38.892660    8751 main.go:141] libmachine: Decoding PEM data...
	I1028 05:01:38.892674    8751 main.go:141] libmachine: Parsing certificate...
	I1028 05:01:38.893092    8751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:01:39.051766    8751 main.go:141] libmachine: Creating SSH key...
	I1028 05:01:39.104209    8751 main.go:141] libmachine: Creating Disk image...
	I1028 05:01:39.104214    8751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:01:39.104396    8751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2
	I1028 05:01:39.114252    8751 main.go:141] libmachine: STDOUT: 
	I1028 05:01:39.114269    8751 main.go:141] libmachine: STDERR: 
	I1028 05:01:39.114332    8751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2 +20000M
	I1028 05:01:39.122862    8751 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:01:39.122883    8751 main.go:141] libmachine: STDERR: 
	I1028 05:01:39.122898    8751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2
	I1028 05:01:39.122902    8751 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:01:39.122914    8751 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:01:39.122951    8751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:ad:f7:9c:92:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2
	I1028 05:01:39.124710    8751 main.go:141] libmachine: STDOUT: 
	I1028 05:01:39.124725    8751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:01:39.124745    8751 client.go:171] duration metric: took 232.270416ms to LocalClient.Create
	I1028 05:01:41.126925    8751 start.go:128] duration metric: took 2.260693208s to createHost
	I1028 05:01:41.126988    8751 start.go:83] releasing machines lock for "multinode-268000", held for 2.260815958s
	W1028 05:01:41.127071    8751 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:01:41.142279    8751 out.go:177] * Deleting "multinode-268000" in qemu2 ...
	W1028 05:01:41.170288    8751 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:01:41.170309    8751 start.go:729] Will try again in 5 seconds ...
	I1028 05:01:46.172567    8751 start.go:360] acquireMachinesLock for multinode-268000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:01:46.173149    8751 start.go:364] duration metric: took 465.667µs to acquireMachinesLock for "multinode-268000"
	I1028 05:01:46.173286    8751 start.go:93] Provisioning new machine with config: &{Name:multinode-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:multinode-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:01:46.173616    8751 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:01:46.185156    8751 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:01:46.233398    8751 start.go:159] libmachine.API.Create for "multinode-268000" (driver="qemu2")
	I1028 05:01:46.233446    8751 client.go:168] LocalClient.Create starting
	I1028 05:01:46.233575    8751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:01:46.233660    8751 main.go:141] libmachine: Decoding PEM data...
	I1028 05:01:46.233677    8751 main.go:141] libmachine: Parsing certificate...
	I1028 05:01:46.233763    8751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:01:46.233819    8751 main.go:141] libmachine: Decoding PEM data...
	I1028 05:01:46.233834    8751 main.go:141] libmachine: Parsing certificate...
	I1028 05:01:46.234409    8751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:01:46.403560    8751 main.go:141] libmachine: Creating SSH key...
	I1028 05:01:46.463049    8751 main.go:141] libmachine: Creating Disk image...
	I1028 05:01:46.463057    8751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:01:46.463243    8751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2
	I1028 05:01:46.473259    8751 main.go:141] libmachine: STDOUT: 
	I1028 05:01:46.473275    8751 main.go:141] libmachine: STDERR: 
	I1028 05:01:46.473336    8751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2 +20000M
	I1028 05:01:46.481931    8751 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:01:46.481950    8751 main.go:141] libmachine: STDERR: 
	I1028 05:01:46.481961    8751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2
	I1028 05:01:46.481975    8751 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:01:46.481984    8751 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:01:46.482013    8751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:88:87:b9:60:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2
	I1028 05:01:46.483848    8751 main.go:141] libmachine: STDOUT: 
	I1028 05:01:46.483863    8751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:01:46.483876    8751 client.go:171] duration metric: took 250.423458ms to LocalClient.Create
	I1028 05:01:48.486054    8751 start.go:128] duration metric: took 2.312384083s to createHost
	I1028 05:01:48.486111    8751 start.go:83] releasing machines lock for "multinode-268000", held for 2.312902666s
	W1028 05:01:48.486519    8751 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-268000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-268000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:01:48.500286    8751 out.go:201] 
	W1028 05:01:48.503362    8751 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:01:48.503424    8751 out.go:270] * 
	* 
	W1028 05:01:48.506111    8751 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:01:48.515233    8751 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-268000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (72.998041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.87s)

TestMultiNode/serial/DeployApp2Nodes (69.89s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (64.866834ms)

** stderr ** 
	error: cluster "multinode-268000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- rollout status deployment/busybox: exit status 1 (62.258917ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.830667ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 05:01:48.795513    7452 retry.go:31] will retry after 1.346833307s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.823792ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 05:01:50.254575    7452 retry.go:31] will retry after 2.002385408s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.440583ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 05:01:52.369876    7452 retry.go:31] will retry after 3.268525566s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.003708ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 05:01:55.750775    7452 retry.go:31] will retry after 2.138388432s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.517125ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 05:01:58.001016    7452 retry.go:31] will retry after 5.573437994s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.819792ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 05:02:03.590857    7452 retry.go:31] will retry after 6.395448748s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.65325ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 05:02:10.099150    7452 retry.go:31] will retry after 14.801276544s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.490583ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 05:02:25.011197    7452 retry.go:31] will retry after 15.573306586s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.785875ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1028 05:02:40.695404    7452 retry.go:31] will retry after 17.306926035s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.988ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.392458ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.950334ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.469458ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.933625ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (35.225541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (69.89s)

TestMultiNode/serial/PingHostFrom2Pods (0.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-268000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.013875ms)

** stderr ** 
	error: no server found for cluster "multinode-268000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (34.7095ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-268000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-268000 -v 3 --alsologtostderr: exit status 83 (49.038458ms)

-- stdout --
	* The control-plane node multinode-268000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-268000"

-- /stdout --
** stderr ** 
	I1028 05:02:58.528523    8829 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:02:58.528733    8829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:58.528737    8829 out.go:358] Setting ErrFile to fd 2...
	I1028 05:02:58.528739    8829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:58.528886    8829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:02:58.529132    8829 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:02:58.529360    8829 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:02:58.534413    8829 out.go:177] * The control-plane node multinode-268000 host is not running: state=Stopped
	I1028 05:02:58.539403    8829 out.go:177]   To start a cluster, run: "minikube start -p multinode-268000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-268000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (35.06275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-268000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-268000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.12525ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-268000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-268000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-268000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (35.14825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.09s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-268000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-268000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-268000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-268000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (34.60625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status --output json --alsologtostderr: exit status 7 (34.568917ms)

-- stdout --
	{"Name":"multinode-268000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1028 05:02:58.761699    8841 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:02:58.761876    8841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:58.761879    8841 out.go:358] Setting ErrFile to fd 2...
	I1028 05:02:58.761882    8841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:58.762014    8841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:02:58.762138    8841 out.go:352] Setting JSON to true
	I1028 05:02:58.762149    8841 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:02:58.762214    8841 notify.go:220] Checking for updates...
	I1028 05:02:58.762353    8841 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:02:58.762361    8841 status.go:174] checking status of multinode-268000 ...
	I1028 05:02:58.762616    8841 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:02:58.762619    8841 status.go:384] host is not running, skipping remaining checks
	I1028 05:02:58.762621    8841 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-268000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (35.37975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 node stop m03: exit status 85 (49.131292ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-268000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status: exit status 7 (33.61675ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status --alsologtostderr: exit status 7 (34.552875ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:02:58.915300    8850 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:02:58.915474    8850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:58.915477    8850 out.go:358] Setting ErrFile to fd 2...
	I1028 05:02:58.915480    8850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:58.915630    8850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:02:58.915755    8850 out.go:352] Setting JSON to false
	I1028 05:02:58.915766    8850 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:02:58.915837    8850 notify.go:220] Checking for updates...
	I1028 05:02:58.915988    8850 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:02:58.915996    8850 status.go:174] checking status of multinode-268000 ...
	I1028 05:02:58.916260    8850 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:02:58.916263    8850 status.go:384] host is not running, skipping remaining checks
	I1028 05:02:58.916265    8850 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-268000 status --alsologtostderr": multinode-268000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (34.235334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

TestMultiNode/serial/StartAfterStop (55.04s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 node start m03 -v=7 --alsologtostderr: exit status 85 (51.096292ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1028 05:02:58.985245    8854 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:02:58.985648    8854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:58.985652    8854 out.go:358] Setting ErrFile to fd 2...
	I1028 05:02:58.985654    8854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:58.985821    8854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:02:58.986055    8854 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:02:58.986276    8854 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:02:58.989090    8854 out.go:201] 
	W1028 05:02:58.991990    8854 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1028 05:02:58.991995    8854 out.go:270] * 
	* 
	W1028 05:02:58.993756    8854 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:02:58.998028    8854 out.go:201] 

** /stderr **
multinode_test.go:284: I1028 05:02:58.985245    8854 out.go:345] Setting OutFile to fd 1 ...
I1028 05:02:58.985648    8854 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 05:02:58.985652    8854 out.go:358] Setting ErrFile to fd 2...
I1028 05:02:58.985654    8854 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 05:02:58.985821    8854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
I1028 05:02:58.986055    8854 mustload.go:65] Loading cluster: multinode-268000
I1028 05:02:58.986276    8854 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 05:02:58.989090    8854 out.go:201] 
W1028 05:02:58.991990    8854 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1028 05:02:58.991995    8854 out.go:270] * 
* 
W1028 05:02:58.993756    8854 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1028 05:02:58.998028    8854 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-268000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr: exit status 7 (35.168375ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:02:59.036290    8856 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:02:59.036468    8856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:59.036471    8856 out.go:358] Setting ErrFile to fd 2...
	I1028 05:02:59.036474    8856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:59.036601    8856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:02:59.036724    8856 out.go:352] Setting JSON to false
	I1028 05:02:59.036736    8856 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:02:59.036804    8856 notify.go:220] Checking for updates...
	I1028 05:02:59.036963    8856 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:02:59.036971    8856 status.go:174] checking status of multinode-268000 ...
	I1028 05:02:59.037218    8856 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:02:59.037221    8856 status.go:384] host is not running, skipping remaining checks
	I1028 05:02:59.037223    8856 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 05:02:59.038106    7452 retry.go:31] will retry after 790.044966ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr: exit status 7 (80.026958ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:02:59.908438    8858 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:02:59.908649    8858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:59.908653    8858 out.go:358] Setting ErrFile to fd 2...
	I1028 05:02:59.908656    8858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:02:59.908810    8858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:02:59.908962    8858 out.go:352] Setting JSON to false
	I1028 05:02:59.908975    8858 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:02:59.909010    8858 notify.go:220] Checking for updates...
	I1028 05:02:59.909237    8858 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:02:59.909247    8858 status.go:174] checking status of multinode-268000 ...
	I1028 05:02:59.909544    8858 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:02:59.909548    8858 status.go:384] host is not running, skipping remaining checks
	I1028 05:02:59.909550    8858 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 05:02:59.910580    7452 retry.go:31] will retry after 891.031165ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr: exit status 7 (80.360917ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:03:00.882027    8860 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:03:00.882248    8860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:00.882259    8860 out.go:358] Setting ErrFile to fd 2...
	I1028 05:03:00.882262    8860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:00.882442    8860 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:03:00.882600    8860 out.go:352] Setting JSON to false
	I1028 05:03:00.882613    8860 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:03:00.882652    8860 notify.go:220] Checking for updates...
	I1028 05:03:00.882865    8860 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:03:00.882875    8860 status.go:174] checking status of multinode-268000 ...
	I1028 05:03:00.883174    8860 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:03:00.883178    8860 status.go:384] host is not running, skipping remaining checks
	I1028 05:03:00.883180    8860 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 05:03:00.884320    7452 retry.go:31] will retry after 2.97498105s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr: exit status 7 (80.428542ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:03:03.939835    8862 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:03:03.940047    8862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:03.940051    8862 out.go:358] Setting ErrFile to fd 2...
	I1028 05:03:03.940054    8862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:03.940227    8862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:03:03.940390    8862 out.go:352] Setting JSON to false
	I1028 05:03:03.940403    8862 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:03:03.940442    8862 notify.go:220] Checking for updates...
	I1028 05:03:03.940655    8862 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:03:03.940664    8862 status.go:174] checking status of multinode-268000 ...
	I1028 05:03:03.940982    8862 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:03:03.940986    8862 status.go:384] host is not running, skipping remaining checks
	I1028 05:03:03.940989    8862 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 05:03:03.942024    7452 retry.go:31] will retry after 3.89889788s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr: exit status 7 (79.164959ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:03:07.920296    8864 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:03:07.920500    8864 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:07.920504    8864 out.go:358] Setting ErrFile to fd 2...
	I1028 05:03:07.920507    8864 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:07.920678    8864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:03:07.920825    8864 out.go:352] Setting JSON to false
	I1028 05:03:07.920838    8864 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:03:07.920876    8864 notify.go:220] Checking for updates...
	I1028 05:03:07.921091    8864 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:03:07.921101    8864 status.go:174] checking status of multinode-268000 ...
	I1028 05:03:07.921388    8864 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:03:07.921392    8864 status.go:384] host is not running, skipping remaining checks
	I1028 05:03:07.921394    8864 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 05:03:07.922489    7452 retry.go:31] will retry after 2.738222988s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr: exit status 7 (81.335584ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:03:10.742336    8867 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:03:10.742542    8867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:10.742546    8867 out.go:358] Setting ErrFile to fd 2...
	I1028 05:03:10.742549    8867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:10.742724    8867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:03:10.742863    8867 out.go:352] Setting JSON to false
	I1028 05:03:10.742876    8867 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:03:10.742906    8867 notify.go:220] Checking for updates...
	I1028 05:03:10.743129    8867 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:03:10.743138    8867 status.go:174] checking status of multinode-268000 ...
	I1028 05:03:10.743412    8867 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:03:10.743416    8867 status.go:384] host is not running, skipping remaining checks
	I1028 05:03:10.743418    8867 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 05:03:10.744424    7452 retry.go:31] will retry after 8.608882787s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr: exit status 7 (80.258916ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:03:19.433792    8874 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:03:19.433996    8874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:19.434000    8874 out.go:358] Setting ErrFile to fd 2...
	I1028 05:03:19.434003    8874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:19.434166    8874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:03:19.434320    8874 out.go:352] Setting JSON to false
	I1028 05:03:19.434332    8874 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:03:19.434368    8874 notify.go:220] Checking for updates...
	I1028 05:03:19.434573    8874 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:03:19.434583    8874 status.go:174] checking status of multinode-268000 ...
	I1028 05:03:19.434856    8874 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:03:19.434860    8874 status.go:384] host is not running, skipping remaining checks
	I1028 05:03:19.434863    8874 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 05:03:19.435896    7452 retry.go:31] will retry after 9.323907328s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr: exit status 7 (80.707209ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:03:28.840643    8877 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:03:28.840855    8877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:28.840859    8877 out.go:358] Setting ErrFile to fd 2...
	I1028 05:03:28.840861    8877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:28.841028    8877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:03:28.841174    8877 out.go:352] Setting JSON to false
	I1028 05:03:28.841187    8877 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:03:28.841231    8877 notify.go:220] Checking for updates...
	I1028 05:03:28.841432    8877 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:03:28.841441    8877 status.go:174] checking status of multinode-268000 ...
	I1028 05:03:28.841738    8877 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:03:28.841743    8877 status.go:384] host is not running, skipping remaining checks
	I1028 05:03:28.841745    8877 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1028 05:03:28.842767    7452 retry.go:31] will retry after 25.032938744s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr: exit status 7 (79.189041ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:03:53.954745    8881 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:03:53.954980    8881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:53.954984    8881 out.go:358] Setting ErrFile to fd 2...
	I1028 05:03:53.954987    8881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:53.955140    8881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:03:53.955297    8881 out.go:352] Setting JSON to false
	I1028 05:03:53.955308    8881 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:03:53.955345    8881 notify.go:220] Checking for updates...
	I1028 05:03:53.955567    8881 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:03:53.955576    8881 status.go:174] checking status of multinode-268000 ...
	I1028 05:03:53.955863    8881 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:03:53.955867    8881 status.go:384] host is not running, skipping remaining checks
	I1028 05:03:53.955869    8881 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-268000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (36.763209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (55.04s)
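Every failure in this section, and in the sections that follow, bottoms out in the same driver-level error: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. Below is a minimal Go sketch of the same unix-socket dial the driver performs; it is illustrative only (the file name and messages are invented, not minikube code) but gives a quick way to confirm the daemon is back before rerunning the suite.

    // vmnetcheck.go: illustrative pre-flight check, not part of minikube.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Socket path taken from the driver logs above.
    	const sock = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// Mirrors the failure mode in this report: the socket path can
    		// exist while no daemon is accepting connections on it.
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections at", sock)
    }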

TestMultiNode/serial/RestartKeepsNodes (7.39s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-268000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-268000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-268000: (2.009212167s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-268000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-268000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.229975s)

-- stdout --
	* [multinode-268000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-268000" primary control-plane node in "multinode-268000" cluster
	* Restarting existing qemu2 VM for "multinode-268000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-268000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:03:56.106077    8897 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:03:56.106260    8897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:56.106264    8897 out.go:358] Setting ErrFile to fd 2...
	I1028 05:03:56.106267    8897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:03:56.106420    8897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:03:56.107717    8897 out.go:352] Setting JSON to false
	I1028 05:03:56.127352    8897 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5607,"bootTime":1730111429,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:03:56.127421    8897 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:03:56.131904    8897 out.go:177] * [multinode-268000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:03:56.139730    8897 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:03:56.139815    8897 notify.go:220] Checking for updates...
	I1028 05:03:56.143648    8897 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:03:56.146724    8897 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:03:56.149643    8897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:03:56.152653    8897 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:03:56.155686    8897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:03:56.159113    8897 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:03:56.159170    8897 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:03:56.163643    8897 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:03:56.170705    8897 start.go:297] selected driver: qemu2
	I1028 05:03:56.170712    8897 start.go:901] validating driver "qemu2" against &{Name:multinode-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:03:56.170781    8897 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:03:56.173360    8897 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:03:56.173388    8897 cni.go:84] Creating CNI manager for ""
	I1028 05:03:56.173410    8897 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 05:03:56.173456    8897 start.go:340] cluster config:
	{Name:multinode-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:03:56.177960    8897 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:03:56.186662    8897 out.go:177] * Starting "multinode-268000" primary control-plane node in "multinode-268000" cluster
	I1028 05:03:56.190699    8897 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:03:56.190719    8897 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:03:56.190727    8897 cache.go:56] Caching tarball of preloaded images
	I1028 05:03:56.190812    8897 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:03:56.190823    8897 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:03:56.190866    8897 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/multinode-268000/config.json ...
	I1028 05:03:56.191302    8897 start.go:360] acquireMachinesLock for multinode-268000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:03:56.191353    8897 start.go:364] duration metric: took 43.5µs to acquireMachinesLock for "multinode-268000"
	I1028 05:03:56.191361    8897 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:03:56.191365    8897 fix.go:54] fixHost starting: 
	I1028 05:03:56.191478    8897 fix.go:112] recreateIfNeeded on multinode-268000: state=Stopped err=<nil>
	W1028 05:03:56.191487    8897 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:03:56.198569    8897 out.go:177] * Restarting existing qemu2 VM for "multinode-268000" ...
	I1028 05:03:56.202661    8897 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:03:56.202709    8897 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:88:87:b9:60:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2
	I1028 05:03:56.204880    8897 main.go:141] libmachine: STDOUT: 
	I1028 05:03:56.204898    8897 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:03:56.204927    8897 fix.go:56] duration metric: took 13.560084ms for fixHost
	I1028 05:03:56.204931    8897 start.go:83] releasing machines lock for "multinode-268000", held for 13.5745ms
	W1028 05:03:56.204938    8897 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:03:56.204989    8897 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:03:56.204993    8897 start.go:729] Will try again in 5 seconds ...
	I1028 05:04:01.207092    8897 start.go:360] acquireMachinesLock for multinode-268000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:04:01.207483    8897 start.go:364] duration metric: took 315.583µs to acquireMachinesLock for "multinode-268000"
	I1028 05:04:01.207602    8897 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:04:01.207618    8897 fix.go:54] fixHost starting: 
	I1028 05:04:01.208303    8897 fix.go:112] recreateIfNeeded on multinode-268000: state=Stopped err=<nil>
	W1028 05:04:01.208328    8897 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:04:01.216806    8897 out.go:177] * Restarting existing qemu2 VM for "multinode-268000" ...
	I1028 05:04:01.220737    8897 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:04:01.220979    8897 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:88:87:b9:60:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2
	I1028 05:04:01.230818    8897 main.go:141] libmachine: STDOUT: 
	I1028 05:04:01.230869    8897 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:04:01.230928    8897 fix.go:56] duration metric: took 23.312458ms for fixHost
	I1028 05:04:01.230944    8897 start.go:83] releasing machines lock for "multinode-268000", held for 23.441291ms
	W1028 05:04:01.231116    8897 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:04:01.239750    8897 out.go:201] 
	W1028 05:04:01.243830    8897 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:04:01.243867    8897 out.go:270] * 
	* 
	W1028 05:04:01.246352    8897 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:04:01.254807    8897 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-268000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-268000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (36.476375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.39s)
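The stderr above shows the start flow's recovery path: one failed fixHost, a logged "Will try again in 5 seconds ..." (start.go:729), a second identical failure, then exit status 80 with GUEST_PROVISION. A hedged sketch of that control flow follows; the function names are illustrative stand-ins, not minikube's actual internals.

    // startretry.go: an illustrative reading of the retry seen in the log.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost stands in for the driver start that fails while the
    // socket_vmnet daemon is down.
    func startHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	if err := startHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second) // the fixed back-off visible in the log
    		if err := startHost(); err != nil {
    			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    		}
    	}
    }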

TestMultiNode/serial/DeleteNode (0.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 node delete m03: exit status 83 (44.076125ms)

-- stdout --
	* The control-plane node multinode-268000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-268000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-268000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status --alsologtostderr: exit status 7 (35.597125ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:04:01.459490    8911 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:04:01.459655    8911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:04:01.459659    8911 out.go:358] Setting ErrFile to fd 2...
	I1028 05:04:01.459661    8911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:04:01.459805    8911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:04:01.459936    8911 out.go:352] Setting JSON to false
	I1028 05:04:01.459947    8911 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:04:01.460011    8911 notify.go:220] Checking for updates...
	I1028 05:04:01.460162    8911 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:04:01.460171    8911 status.go:174] checking status of multinode-268000 ...
	I1028 05:04:01.460425    8911 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:04:01.460429    8911 status.go:384] host is not running, skipping remaining checks
	I1028 05:04:01.460431    8911 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-268000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (35.198417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.12s)

TestMultiNode/serial/StopMultiNode (2.23s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-268000 stop: (2.087051167s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status: exit status 7 (70.083792ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-268000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-268000 status --alsologtostderr: exit status 7 (35.206084ms)

-- stdout --
	multinode-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1028 05:04:03.687616    8927 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:04:03.687805    8927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:04:03.687808    8927 out.go:358] Setting ErrFile to fd 2...
	I1028 05:04:03.687810    8927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:04:03.687942    8927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:04:03.688067    8927 out.go:352] Setting JSON to false
	I1028 05:04:03.688078    8927 mustload.go:65] Loading cluster: multinode-268000
	I1028 05:04:03.688132    8927 notify.go:220] Checking for updates...
	I1028 05:04:03.688293    8927 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:04:03.688301    8927 status.go:174] checking status of multinode-268000 ...
	I1028 05:04:03.688550    8927 status.go:371] multinode-268000 host status = "Stopped" (err=<nil>)
	I1028 05:04:03.688553    8927 status.go:384] host is not running, skipping remaining checks
	I1028 05:04:03.688556    8927 status.go:176] multinode-268000 status: &{Name:multinode-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-268000 status --alsologtostderr": multinode-268000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-268000 status --alsologtostderr": multinode-268000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (35.044792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.23s)
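The assertions at multinode_test.go:364 and :368 tally "host: Stopped" and "kubelet: Stopped" lines in the status output, and fail here presumably because only the control-plane node exists (the worker was never added), so each count comes up one short of the expected node count. An illustrative tally in Go, assumed rather than copied from the test:

    // stopcount.go: illustrative, not the test's literal code.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// The status output captured above, verbatim.
    	out := `multinode-268000
    type: Control Plane
    host: Stopped
    kubelet: Stopped
    apiserver: Stopped
    kubeconfig: Stopped`

    	hosts := strings.Count(out, "host: Stopped")
    	kubelets := strings.Count(out, "kubelet: Stopped")
    	// Prints 1 and 1; a two-node cluster would be expected to show 2 and 2.
    	fmt.Printf("stopped hosts=%d, stopped kubelets=%d\n", hosts, kubelets)
    }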

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-268000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-268000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.186847292s)

-- stdout --
	* [multinode-268000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-268000" primary control-plane node in "multinode-268000" cluster
	* Restarting existing qemu2 VM for "multinode-268000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-268000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:04:03.757200    8931 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:04:03.757371    8931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:04:03.757375    8931 out.go:358] Setting ErrFile to fd 2...
	I1028 05:04:03.757377    8931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:04:03.757498    8931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:04:03.758571    8931 out.go:352] Setting JSON to false
	I1028 05:04:03.776208    8931 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5614,"bootTime":1730111429,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:04:03.776289    8931 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:04:03.780144    8931 out.go:177] * [multinode-268000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:04:03.788118    8931 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:04:03.788184    8931 notify.go:220] Checking for updates...
	I1028 05:04:03.794045    8931 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:04:03.797103    8931 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:04:03.800085    8931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:04:03.803113    8931 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:04:03.806116    8931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:04:03.809365    8931 config.go:182] Loaded profile config "multinode-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:04:03.809648    8931 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:04:03.814017    8931 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:04:03.820055    8931 start.go:297] selected driver: qemu2
	I1028 05:04:03.820062    8931 start.go:901] validating driver "qemu2" against &{Name:multinode-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:04:03.820127    8931 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:04:03.822528    8931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:04:03.822558    8931 cni.go:84] Creating CNI manager for ""
	I1028 05:04:03.822583    8931 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 05:04:03.822632    8931 start.go:340] cluster config:
	{Name:multinode-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:04:03.827224    8931 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:04:03.835104    8931 out.go:177] * Starting "multinode-268000" primary control-plane node in "multinode-268000" cluster
	I1028 05:04:03.839074    8931 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:04:03.839090    8931 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:04:03.839097    8931 cache.go:56] Caching tarball of preloaded images
	I1028 05:04:03.839156    8931 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:04:03.839161    8931 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:04:03.839222    8931 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/multinode-268000/config.json ...
	I1028 05:04:03.839654    8931 start.go:360] acquireMachinesLock for multinode-268000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:04:03.839683    8931 start.go:364] duration metric: took 23.459µs to acquireMachinesLock for "multinode-268000"
	I1028 05:04:03.839692    8931 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:04:03.839697    8931 fix.go:54] fixHost starting: 
	I1028 05:04:03.839812    8931 fix.go:112] recreateIfNeeded on multinode-268000: state=Stopped err=<nil>
	W1028 05:04:03.839820    8931 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:04:03.848011    8931 out.go:177] * Restarting existing qemu2 VM for "multinode-268000" ...
	I1028 05:04:03.852051    8931 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:04:03.852086    8931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:88:87:b9:60:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2
	I1028 05:04:03.854252    8931 main.go:141] libmachine: STDOUT: 
	I1028 05:04:03.854272    8931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:04:03.854301    8931 fix.go:56] duration metric: took 14.603625ms for fixHost
	I1028 05:04:03.854306    8931 start.go:83] releasing machines lock for "multinode-268000", held for 14.6185ms
	W1028 05:04:03.854311    8931 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:04:03.854340    8931 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:04:03.854344    8931 start.go:729] Will try again in 5 seconds ...
	I1028 05:04:08.856418    8931 start.go:360] acquireMachinesLock for multinode-268000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:04:08.856853    8931 start.go:364] duration metric: took 331.959µs to acquireMachinesLock for "multinode-268000"
	I1028 05:04:08.856977    8931 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:04:08.856996    8931 fix.go:54] fixHost starting: 
	I1028 05:04:08.857631    8931 fix.go:112] recreateIfNeeded on multinode-268000: state=Stopped err=<nil>
	W1028 05:04:08.857658    8931 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:04:08.862063    8931 out.go:177] * Restarting existing qemu2 VM for "multinode-268000" ...
	I1028 05:04:08.865974    8931 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:04:08.866169    8931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:88:87:b9:60:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/multinode-268000/disk.qcow2
	I1028 05:04:08.875780    8931 main.go:141] libmachine: STDOUT: 
	I1028 05:04:08.875842    8931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:04:08.875919    8931 fix.go:56] duration metric: took 18.920708ms for fixHost
	I1028 05:04:08.875973    8931 start.go:83] releasing machines lock for "multinode-268000", held for 19.060375ms
	W1028 05:04:08.876160    8931 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:04:08.883978    8931 out.go:201] 
	W1028 05:04:08.888017    8931 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:04:08.888042    8931 out.go:270] * 
	* 
	W1028 05:04:08.890720    8931 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:04:08.898022    8931 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-268000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (75.03825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
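One detail worth noting in the libmachine command logged above: qemu-system-aarch64 is not executed directly. The whole argv is handed to socket_vmnet_client, which (per socket_vmnet's documented design; an inference here, not something this log states) first dials /var/run/socket_vmnet and then passes the connected fd to qemu as fd 3, matching "-netdev socket,id=net0,fd=3". The "Connection refused" therefore occurs in the wrapper, before qemu ever starts. A trimmed Go sketch of the same invocation:

    // launchsketch.go: illustrative reconstruction; the flags are copied
    // from the log, the wrapper semantics in comments are assumptions.
    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command(
    		"/opt/socket_vmnet/bin/socket_vmnet_client", // dials the unix socket first
    		"/var/run/socket_vmnet",                     // the socket refusing connections in this run
    		"qemu-system-aarch64",                       // only exec'd if the dial succeeds
    		"-M", "virt,highmem=off",
    		"-cpu", "host",
    		"-accel", "hvf",
    		"-m", "2200",
    		"-smp", "2",
    		"-device", "virtio-net-pci,netdev=net0,mac=52:88:87:b9:60:02",
    		"-netdev", "socket,id=net0,fd=3", // fd 3 is the fd the wrapper passes down
    	)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	_ = cmd.Run() // with the daemon down this fails exactly like the log
    }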

TestMultiNode/serial/ValidateNameConflict (20.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-268000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-268000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-268000-m01 --driver=qemu2 : exit status 80 (9.88479225s)

-- stdout --
	* [multinode-268000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-268000-m01" primary control-plane node in "multinode-268000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-268000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-268000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-268000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-268000-m02 --driver=qemu2 : exit status 80 (10.000822375s)

-- stdout --
	* [multinode-268000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-268000-m02" primary control-plane node in "multinode-268000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-268000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-268000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-268000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-268000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-268000: exit status 83 (87.528291ms)

-- stdout --
	* The control-plane node multinode-268000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-268000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-268000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-268000 -n multinode-268000: exit status 7 (35.905083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.13s)

TestPreload (10.25s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-246000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-246000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.08894875s)

-- stdout --
	* [test-preload-246000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-246000" primary control-plane node in "test-preload-246000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-246000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:04:29.265301    8985 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:04:29.265471    8985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:04:29.265475    8985 out.go:358] Setting ErrFile to fd 2...
	I1028 05:04:29.265477    8985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:04:29.265594    8985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:04:29.266712    8985 out.go:352] Setting JSON to false
	I1028 05:04:29.284345    8985 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5640,"bootTime":1730111429,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:04:29.284420    8985 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:04:29.289427    8985 out.go:177] * [test-preload-246000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:04:29.300403    8985 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:04:29.300464    8985 notify.go:220] Checking for updates...
	I1028 05:04:29.305819    8985 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:04:29.309268    8985 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:04:29.312329    8985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:04:29.315270    8985 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:04:29.318210    8985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:04:29.321661    8985 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:04:29.321715    8985 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:04:29.326244    8985 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:04:29.333261    8985 start.go:297] selected driver: qemu2
	I1028 05:04:29.333270    8985 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:04:29.333277    8985 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:04:29.335868    8985 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:04:29.339331    8985 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:04:29.342281    8985 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:04:29.342299    8985 cni.go:84] Creating CNI manager for ""
	I1028 05:04:29.342321    8985 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:04:29.342325    8985 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:04:29.342358    8985 start.go:340] cluster config:
	{Name:test-preload-246000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:04:29.347060    8985 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:04:29.354257    8985 out.go:177] * Starting "test-preload-246000" primary control-plane node in "test-preload-246000" cluster
	I1028 05:04:29.358228    8985 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1028 05:04:29.358317    8985 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/test-preload-246000/config.json ...
	I1028 05:04:29.358344    8985 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/test-preload-246000/config.json: {Name:mk4ce16346dfff3f9d668aa08c4d22a6c288b584 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:04:29.358347    8985 cache.go:107] acquiring lock: {Name:mkcfd3d64864e0711c1976baca0154e406e7e0ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:04:29.358356    8985 cache.go:107] acquiring lock: {Name:mk1a90be8c3bab33e5c45d3a8d8f271f19ce1a6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:04:29.358353    8985 cache.go:107] acquiring lock: {Name:mkf355d35e06a26a60b7f46248befd77b1ce10b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:04:29.358459    8985 cache.go:107] acquiring lock: {Name:mk7b923d41b73359fe78cf03d2c1573b7f943ee0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:04:29.358561    8985 cache.go:107] acquiring lock: {Name:mk2f10ad97e67e986684c7397a93923d468e8bd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:04:29.358583    8985 cache.go:107] acquiring lock: {Name:mkdb2b5d2e964a837f1d2d9553d8aeeda52e2d4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:04:29.358586    8985 cache.go:107] acquiring lock: {Name:mk925eaaf8374e028ac5b8beebce7d91ef0cdba4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:04:29.358630    8985 cache.go:107] acquiring lock: {Name:mk6ee56cb86222d603335137de4b7a0bae3d2a26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:04:29.358770    8985 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1028 05:04:29.358792    8985 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1028 05:04:29.358779    8985 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 05:04:29.358965    8985 start.go:360] acquireMachinesLock for test-preload-246000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:04:29.359012    8985 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:04:29.359034    8985 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 05:04:29.359083    8985 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1028 05:04:29.359121    8985 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:04:29.359134    8985 start.go:364] duration metric: took 151.959µs to acquireMachinesLock for "test-preload-246000"
	I1028 05:04:29.359150    8985 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 05:04:29.359148    8985 start.go:93] Provisioning new machine with config: &{Name:test-preload-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:04:29.359205    8985 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:04:29.367237    8985 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:04:29.372460    8985 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1028 05:04:29.372546    8985 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:04:29.372588    8985 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:04:29.372951    8985 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 05:04:29.374619    8985 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 05:04:29.374644    8985 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1028 05:04:29.374676    8985 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1028 05:04:29.374700    8985 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 05:04:29.385795    8985 start.go:159] libmachine.API.Create for "test-preload-246000" (driver="qemu2")
	I1028 05:04:29.385817    8985 client.go:168] LocalClient.Create starting
	I1028 05:04:29.385899    8985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:04:29.385937    8985 main.go:141] libmachine: Decoding PEM data...
	I1028 05:04:29.385949    8985 main.go:141] libmachine: Parsing certificate...
	I1028 05:04:29.385989    8985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:04:29.386020    8985 main.go:141] libmachine: Decoding PEM data...
	I1028 05:04:29.386031    8985 main.go:141] libmachine: Parsing certificate...
	I1028 05:04:29.386400    8985 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:04:29.575867    8985 main.go:141] libmachine: Creating SSH key...
	I1028 05:04:29.629143    8985 main.go:141] libmachine: Creating Disk image...
	I1028 05:04:29.629159    8985 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:04:29.629354    8985 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2
	I1028 05:04:29.638914    8985 main.go:141] libmachine: STDOUT: 
	I1028 05:04:29.638928    8985 main.go:141] libmachine: STDERR: 
	I1028 05:04:29.638973    8985 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2 +20000M
	I1028 05:04:29.648038    8985 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:04:29.648055    8985 main.go:141] libmachine: STDERR: 
	I1028 05:04:29.648068    8985 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2
	I1028 05:04:29.648073    8985 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:04:29.648087    8985 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:04:29.648112    8985 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:d0:8b:96:93:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2
	I1028 05:04:29.650390    8985 main.go:141] libmachine: STDOUT: 
	I1028 05:04:29.650410    8985 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:04:29.650431    8985 client.go:171] duration metric: took 264.614083ms to LocalClient.Create
	I1028 05:04:29.844786    8985 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W1028 05:04:29.854279    8985 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1028 05:04:29.854297    8985 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1028 05:04:29.896633    8985 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1028 05:04:30.012134    8985 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1028 05:04:30.049791    8985 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1028 05:04:30.062383    8985 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1028 05:04:30.062398    8985 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 703.979458ms
	I1028 05:04:30.062408    8985 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I1028 05:04:30.149387    8985 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1028 05:04:30.168676    8985 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W1028 05:04:30.434514    8985 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1028 05:04:30.434621    8985 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 05:04:30.889038    8985 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1028 05:04:30.889083    8985 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.530760083s
	I1028 05:04:30.889110    8985 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1028 05:04:31.650641    8985 start.go:128] duration metric: took 2.291459833s to createHost
	I1028 05:04:31.650695    8985 start.go:83] releasing machines lock for "test-preload-246000", held for 2.291601959s
	W1028 05:04:31.650769    8985 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:04:31.669270    8985 out.go:177] * Deleting "test-preload-246000" in qemu2 ...
	W1028 05:04:31.702283    8985 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:04:31.702314    8985 start.go:729] Will try again in 5 seconds ...
	I1028 05:04:31.981533    8985 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1028 05:04:31.981582    8985 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.623291166s
	I1028 05:04:31.981606    8985 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1028 05:04:32.784474    8985 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1028 05:04:32.784544    8985 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.426065042s
	I1028 05:04:32.784569    8985 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1028 05:04:34.108383    8985 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1028 05:04:34.108431    8985 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.750185583s
	I1028 05:04:34.108482    8985 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1028 05:04:36.234557    8985 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1028 05:04:36.234614    8985 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.876272833s
	I1028 05:04:36.234642    8985 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1028 05:04:36.702532    8985 start.go:360] acquireMachinesLock for test-preload-246000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:04:36.703049    8985 start.go:364] duration metric: took 441.5µs to acquireMachinesLock for "test-preload-246000"
	I1028 05:04:36.703175    8985 start.go:93] Provisioning new machine with config: &{Name:test-preload-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:04:36.703517    8985 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:04:36.710219    8985 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:04:36.760075    8985 start.go:159] libmachine.API.Create for "test-preload-246000" (driver="qemu2")
	I1028 05:04:36.760136    8985 client.go:168] LocalClient.Create starting
	I1028 05:04:36.760264    8985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:04:36.760341    8985 main.go:141] libmachine: Decoding PEM data...
	I1028 05:04:36.760357    8985 main.go:141] libmachine: Parsing certificate...
	I1028 05:04:36.760424    8985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:04:36.760487    8985 main.go:141] libmachine: Decoding PEM data...
	I1028 05:04:36.760500    8985 main.go:141] libmachine: Parsing certificate...
	I1028 05:04:36.761076    8985 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:04:36.880266    8985 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1028 05:04:36.880287    8985 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.521894583s
	I1028 05:04:36.880304    8985 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1028 05:04:36.934135    8985 main.go:141] libmachine: Creating SSH key...
	I1028 05:04:37.252011    8985 main.go:141] libmachine: Creating Disk image...
	I1028 05:04:37.252031    8985 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:04:37.252293    8985 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2
	I1028 05:04:37.262950    8985 main.go:141] libmachine: STDOUT: 
	I1028 05:04:37.262973    8985 main.go:141] libmachine: STDERR: 
	I1028 05:04:37.263054    8985 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2 +20000M
	I1028 05:04:37.271949    8985 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:04:37.271968    8985 main.go:141] libmachine: STDERR: 
	I1028 05:04:37.271982    8985 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2
	I1028 05:04:37.271987    8985 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:04:37.272003    8985 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:04:37.272039    8985 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:7c:c5:67:a5:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/test-preload-246000/disk.qcow2
	I1028 05:04:37.273940    8985 main.go:141] libmachine: STDOUT: 
	I1028 05:04:37.273952    8985 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:04:37.273966    8985 client.go:171] duration metric: took 513.836625ms to LocalClient.Create
	I1028 05:04:39.275011    8985 start.go:128] duration metric: took 2.571494666s to createHost
	I1028 05:04:39.275080    8985 start.go:83] releasing machines lock for "test-preload-246000", held for 2.572062917s
	W1028 05:04:39.275440    8985 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:04:39.284816    8985 out.go:201] 
	W1028 05:04:39.293839    8985 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:04:39.293866    8985 out.go:270] * 
	* 
	W1028 05:04:39.296566    8985 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:04:39.305787    8985 out.go:201] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-246000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-28 05:04:39.32361 -0700 PDT m=+638.761197626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-246000 -n test-preload-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-246000 -n test-preload-246000: exit status 7 (73.873792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-246000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-246000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-246000
--- FAIL: TestPreload (10.25s)
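
Every qemu2 start in this run dies at the same step: the socket_vmnet_client wrapper that minikube uses to launch qemu-system-aarch64 cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the VM is created, retried once after 5 seconds, and then abandoned with GUEST_PROVISION. The same signature recurs in the TestScheduledStopUnix and TestSkaffold sections below. The daemon can be probed directly, independent of minikube; this sketch reuses the client binary and socket path shown in the log above and assumes the Homebrew-service install described in minikube's qemu2 driver documentation:

	# Probe the socket the same way minikube does. socket_vmnet_client
	# connects to the socket and execs the given command, so this prints
	# the same 'Failed to connect ... Connection refused' error when the
	# daemon is down, and exits 0 when it is healthy.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

	# Restart the daemon (vmnet requires root, hence sudo).
	sudo brew services restart socket_vmnet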

                                                
                                    
TestScheduledStopUnix (9.95s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-539000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-539000 --memory=2048 --driver=qemu2 : exit status 80 (9.783574583s)

                                                
                                                
-- stdout --
	* [scheduled-stop-539000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-539000" primary control-plane node in "scheduled-stop-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-539000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-539000" primary control-plane node in "scheduled-stop-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-28 05:04:49.267616 -0700 PDT m=+648.705421126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-539000 -n scheduled-stop-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-539000 -n scheduled-stop-539000: exit status 7 (76.624ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-539000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-539000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-539000
--- FAIL: TestScheduledStopUnix (9.95s)

                                                
                                    
TestSkaffold (12.29s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3902541046 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3902541046 version: (1.017515916s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-458000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-458000 --memory=2600 --driver=qemu2 : exit status 80 (9.799317125s)

                                                
                                                
-- stdout --
	* [skaffold-458000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-458000" primary control-plane node in "skaffold-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-458000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-458000" primary control-plane node in "skaffold-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-28 05:05:01.566927 -0700 PDT m=+661.005000501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-458000 -n skaffold-458000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-458000 -n skaffold-458000: exit status 7 (68.50875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-458000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-458000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-458000
--- FAIL: TestSkaffold (12.29s)

                                                
                                    
TestRunningBinaryUpgrade (586.47s)
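
Unlike the preceding failures, the first half of this test succeeds: the legacy minikube v1.26.0 binary boots its VM, because (per the profile config echoed in the stderr log below) that profile has empty Network, SocketVMnetClientPath and SocketVMnetPath fields, i.e. it uses QEMU user-mode networking with SSH forwarded over a localhost port (57998 here) and never touches the socket_vmnet daemon. One way to check which networking mode a profile uses; the jq filter is illustrative and names the same fields that appear in the config dump:

	jq '{Network, SocketVMnetClientPath, SocketVMnetPath}' \
	  /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/config.json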

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1714137186 start -p running-upgrade-581000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1714137186 start -p running-upgrade-581000 --memory=2200 --vm-driver=qemu2 : (50.72265925s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-581000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-581000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.347467042s)

                                                
                                                
-- stdout --
	* [running-upgrade-581000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-581000" primary control-plane node in "running-upgrade-581000" cluster
	* Updating the running qemu2 "running-upgrade-581000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:06:34.667872    9360 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:06:34.668207    9360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:06:34.668211    9360 out.go:358] Setting ErrFile to fd 2...
	I1028 05:06:34.668213    9360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:06:34.668354    9360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:06:34.669387    9360 out.go:352] Setting JSON to false
	I1028 05:06:34.688506    9360 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5765,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:06:34.688573    9360 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:06:34.691408    9360 out.go:177] * [running-upgrade-581000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:06:34.698721    9360 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:06:34.698780    9360 notify.go:220] Checking for updates...
	I1028 05:06:34.707614    9360 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:06:34.711487    9360 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:06:34.714665    9360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:06:34.717639    9360 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:06:34.720648    9360 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:06:34.723840    9360 config.go:182] Loaded profile config "running-upgrade-581000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:06:34.726592    9360 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 05:06:34.729690    9360 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:06:34.733609    9360 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:06:34.740676    9360 start.go:297] selected driver: qemu2
	I1028 05:06:34.740682    9360 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58030 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 05:06:34.740740    9360 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:06:34.743478    9360 cni.go:84] Creating CNI manager for ""
	I1028 05:06:34.743513    9360 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:06:34.743549    9360 start.go:340] cluster config:
	{Name:running-upgrade-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58030 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 05:06:34.743603    9360 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:06:34.750633    9360 out.go:177] * Starting "running-upgrade-581000" primary control-plane node in "running-upgrade-581000" cluster
	I1028 05:06:34.754573    9360 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 05:06:34.754587    9360 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1028 05:06:34.754591    9360 cache.go:56] Caching tarball of preloaded images
	I1028 05:06:34.754663    9360 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:06:34.754669    9360 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1028 05:06:34.754718    9360 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/config.json ...
	I1028 05:06:34.755163    9360 start.go:360] acquireMachinesLock for running-upgrade-581000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:06:34.755212    9360 start.go:364] duration metric: took 43.209µs to acquireMachinesLock for "running-upgrade-581000"
	I1028 05:06:34.755221    9360 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:06:34.755226    9360 fix.go:54] fixHost starting: 
	I1028 05:06:34.755934    9360 fix.go:112] recreateIfNeeded on running-upgrade-581000: state=Running err=<nil>
	W1028 05:06:34.755943    9360 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:06:34.759697    9360 out.go:177] * Updating the running qemu2 "running-upgrade-581000" VM ...
	I1028 05:06:34.767574    9360 machine.go:93] provisionDockerMachine start ...
	I1028 05:06:34.767630    9360 main.go:141] libmachine: Using SSH client type: native
	I1028 05:06:34.767738    9360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030fa5f0] 0x1030fce30 <nil>  [] 0s} localhost 57998 <nil> <nil>}
	I1028 05:06:34.767744    9360 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 05:06:34.827696    9360 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-581000
	
	I1028 05:06:34.827709    9360 buildroot.go:166] provisioning hostname "running-upgrade-581000"
	I1028 05:06:34.827789    9360 main.go:141] libmachine: Using SSH client type: native
	I1028 05:06:34.827893    9360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030fa5f0] 0x1030fce30 <nil>  [] 0s} localhost 57998 <nil> <nil>}
	I1028 05:06:34.827901    9360 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-581000 && echo "running-upgrade-581000" | sudo tee /etc/hostname
	I1028 05:06:34.891174    9360 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-581000
	
	I1028 05:06:34.891235    9360 main.go:141] libmachine: Using SSH client type: native
	I1028 05:06:34.891337    9360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030fa5f0] 0x1030fce30 <nil>  [] 0s} localhost 57998 <nil> <nil>}
	I1028 05:06:34.891345    9360 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-581000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-581000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-581000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 05:06:34.955111    9360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
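The /etc/hosts command above is the standard idempotent pattern: rewrite an existing 127.0.1.1 alias if one is present, otherwise append a new one. A minimal standalone sketch of the same logic, with a NODE_NAME variable standing in for the hard-coded profile name (an assumption for illustration):

    NODE_NAME=running-upgrade-581000   # assumed; substitute the node's hostname
    if ! grep -q "[[:space:]]${NODE_NAME}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        # an alias already exists: rewrite it in place
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NODE_NAME}/" /etc/hosts
      else
        # no alias yet: append one
        echo "127.0.1.1 ${NODE_NAME}" | sudo tee -a /etc/hosts
      fi
    fi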
	I1028 05:06:34.955125    9360 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19875-6942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19875-6942/.minikube}
	I1028 05:06:34.955133    9360 buildroot.go:174] setting up certificates
	I1028 05:06:34.955138    9360 provision.go:84] configureAuth start
	I1028 05:06:34.955146    9360 provision.go:143] copyHostCerts
	I1028 05:06:34.955213    9360 exec_runner.go:144] found /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.pem, removing ...
	I1028 05:06:34.955227    9360 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.pem
	I1028 05:06:34.955367    9360 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.pem (1082 bytes)
	I1028 05:06:34.955551    9360 exec_runner.go:144] found /Users/jenkins/minikube-integration/19875-6942/.minikube/cert.pem, removing ...
	I1028 05:06:34.955556    9360 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19875-6942/.minikube/cert.pem
	I1028 05:06:34.955644    9360 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19875-6942/.minikube/cert.pem (1123 bytes)
	I1028 05:06:34.955769    9360 exec_runner.go:144] found /Users/jenkins/minikube-integration/19875-6942/.minikube/key.pem, removing ...
	I1028 05:06:34.955774    9360 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19875-6942/.minikube/key.pem
	I1028 05:06:34.955830    9360 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19875-6942/.minikube/key.pem (1675 bytes)
	I1028 05:06:34.955927    9360 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-581000 san=[127.0.0.1 localhost minikube running-upgrade-581000]
	I1028 05:06:35.007220    9360 provision.go:177] copyRemoteCerts
	I1028 05:06:35.007267    9360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 05:06:35.007274    9360 sshutil.go:53] new ssh client: &{IP:localhost Port:57998 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/running-upgrade-581000/id_rsa Username:docker}
	I1028 05:06:35.042168    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 05:06:35.048579    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 05:06:35.055425    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 05:06:35.062393    9360 provision.go:87] duration metric: took 107.249083ms to configureAuth
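configureAuth signs a server certificate against the minikube CA with SANs covering every name the Docker TLS endpoint is reached by (127.0.0.1, localhost, minikube, and the profile name). With the files copied to /etc/docker as above, the material can be double-checked on the guest, for example:

    # confirm the server cert chains to the provisioned CA
    openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    # list the SANs baked into the server cert
    openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'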
	I1028 05:06:35.062402    9360 buildroot.go:189] setting minikube options for container-runtime
	I1028 05:06:35.062508    9360 config.go:182] Loaded profile config "running-upgrade-581000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:06:35.062553    9360 main.go:141] libmachine: Using SSH client type: native
	I1028 05:06:35.062646    9360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030fa5f0] 0x1030fce30 <nil>  [] 0s} localhost 57998 <nil> <nil>}
	I1028 05:06:35.062650    9360 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 05:06:35.124752    9360 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 05:06:35.124760    9360 buildroot.go:70] root file system type: tmpfs
	I1028 05:06:35.124807    9360 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 05:06:35.124866    9360 main.go:141] libmachine: Using SSH client type: native
	I1028 05:06:35.124995    9360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030fa5f0] 0x1030fce30 <nil>  [] 0s} localhost 57998 <nil> <nil>}
	I1028 05:06:35.125027    9360 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 05:06:35.190229    9360 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 05:06:35.190287    9360 main.go:141] libmachine: Using SSH client type: native
	I1028 05:06:35.190386    9360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030fa5f0] 0x1030fce30 <nil>  [] 0s} localhost 57998 <nil> <nil>}
	I1028 05:06:35.190394    9360 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 05:06:35.251759    9360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
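The command above is a change-detection idiom: diff exits zero when the rendered unit matches the installed one, so the move/reload/restart branch only fires when the file actually differs. The same pattern in isolation, using the unit path from the log:

    UNIT=/lib/systemd/system/docker.service
    if ! sudo diff -u "$UNIT" "$UNIT.new"; then
      sudo mv "$UNIT.new" "$UNIT"        # install the changed unit
      sudo systemctl -f daemon-reload    # make systemd re-read unit files
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker   # apply the new definition
    fi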
	I1028 05:06:35.251791    9360 machine.go:96] duration metric: took 484.221333ms to provisionDockerMachine
	I1028 05:06:35.251797    9360 start.go:293] postStartSetup for "running-upgrade-581000" (driver="qemu2")
	I1028 05:06:35.251802    9360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 05:06:35.251863    9360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 05:06:35.251871    9360 sshutil.go:53] new ssh client: &{IP:localhost Port:57998 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/running-upgrade-581000/id_rsa Username:docker}
	I1028 05:06:35.283958    9360 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 05:06:35.285391    9360 info.go:137] Remote host: Buildroot 2021.02.12
	I1028 05:06:35.285399    9360 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19875-6942/.minikube/addons for local assets ...
	I1028 05:06:35.285489    9360 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19875-6942/.minikube/files for local assets ...
	I1028 05:06:35.285626    9360 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem -> 74522.pem in /etc/ssl/certs
	I1028 05:06:35.285782    9360 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 05:06:35.288497    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem --> /etc/ssl/certs/74522.pem (1708 bytes)
	I1028 05:06:35.295416    9360 start.go:296] duration metric: took 43.614667ms for postStartSetup
	I1028 05:06:35.295428    9360 fix.go:56] duration metric: took 540.215959ms for fixHost
	I1028 05:06:35.295495    9360 main.go:141] libmachine: Using SSH client type: native
	I1028 05:06:35.295600    9360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030fa5f0] 0x1030fce30 <nil>  [] 0s} localhost 57998 <nil> <nil>}
	I1028 05:06:35.295604    9360 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 05:06:35.356967    9360 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117195.239136680
	
	I1028 05:06:35.356977    9360 fix.go:216] guest clock: 1730117195.239136680
	I1028 05:06:35.356981    9360 fix.go:229] Guest: 2024-10-28 05:06:35.23913668 -0700 PDT Remote: 2024-10-28 05:06:35.29543 -0700 PDT m=+0.649015126 (delta=-56.29332ms)
	I1028 05:06:35.356993    9360 fix.go:200] guest clock delta is within tolerance: -56.29332ms
	I1028 05:06:35.356996    9360 start.go:83] releasing machines lock for "running-upgrade-581000", held for 601.792791ms
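The clock check runs date +%s.%N on the guest and compares it with the host's wall clock, resyncing only when the delta leaves tolerance; the -56ms drift above passes. A rough equivalent over the forwarded SSH port from the log (key path omitted; GNU date assumed on both ends):

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -p 57998 docker@localhost 'date +%s.%N')   # port and user from the log
    echo "guest-host delta: $(echo "$guest_ts - $host_ts" | bc) s"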
	I1028 05:06:35.357075    9360 ssh_runner.go:195] Run: cat /version.json
	I1028 05:06:35.357086    9360 sshutil.go:53] new ssh client: &{IP:localhost Port:57998 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/running-upgrade-581000/id_rsa Username:docker}
	I1028 05:06:35.357076    9360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 05:06:35.357111    9360 sshutil.go:53] new ssh client: &{IP:localhost Port:57998 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/running-upgrade-581000/id_rsa Username:docker}
	W1028 05:06:35.357614    9360 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:58136->127.0.0.1:57998: read: connection reset by peer
	I1028 05:06:35.357632    9360 retry.go:31] will retry after 234.676259ms: ssh: handshake failed: read tcp 127.0.0.1:58136->127.0.0.1:57998: read: connection reset by peer
	W1028 05:06:35.387900    9360 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1028 05:06:35.387940    9360 ssh_runner.go:195] Run: systemctl --version
	I1028 05:06:35.389883    9360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 05:06:35.391464    9360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 05:06:35.391492    9360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1028 05:06:35.394642    9360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1028 05:06:35.399113    9360 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 05:06:35.399120    9360 start.go:495] detecting cgroup driver to use...
	I1028 05:06:35.399233    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 05:06:35.404696    9360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1028 05:06:35.407684    9360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 05:06:35.411260    9360 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 05:06:35.411293    9360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 05:06:35.414605    9360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 05:06:35.417849    9360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 05:06:35.420790    9360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 05:06:35.426188    9360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 05:06:35.429220    9360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 05:06:35.432082    9360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 05:06:35.435544    9360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 05:06:35.439019    9360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 05:06:35.442508    9360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 05:06:35.445630    9360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:06:35.537481    9360 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 05:06:35.548611    9360 start.go:495] detecting cgroup driver to use...
	I1028 05:06:35.548721    9360 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 05:06:35.554132    9360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 05:06:35.559106    9360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 05:06:35.566527    9360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 05:06:35.571463    9360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 05:06:35.576581    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 05:06:35.581668    9360 ssh_runner.go:195] Run: which cri-dockerd
	I1028 05:06:35.582888    9360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 05:06:35.585834    9360 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1028 05:06:35.590884    9360 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 05:06:35.677308    9360 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 05:06:35.780409    9360 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 05:06:35.780467    9360 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 05:06:35.786602    9360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:06:35.881291    9360 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 05:06:38.768139    9360 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.886895666s)
	I1028 05:06:38.768229    9360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 05:06:38.773227    9360 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1028 05:06:38.779504    9360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 05:06:38.784641    9360 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 05:06:38.836090    9360 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 05:06:38.917850    9360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:06:39.003922    9360 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 05:06:39.009947    9360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 05:06:39.014408    9360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:06:39.110835    9360 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 05:06:39.149594    9360 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 05:06:39.149687    9360 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 05:06:39.151729    9360 start.go:563] Will wait 60s for crictl version
	I1028 05:06:39.151779    9360 ssh_runner.go:195] Run: which crictl
	I1028 05:06:39.153343    9360 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 05:06:39.165314    9360 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1028 05:06:39.165389    9360 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 05:06:39.177664    9360 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 05:06:39.193577    9360 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1028 05:06:39.193686    9360 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1028 05:06:39.195071    9360 kubeadm.go:883] updating cluster {Name:running-upgrade-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58030 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1028 05:06:39.195114    9360 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 05:06:39.195155    9360 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 05:06:39.207186    9360 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 05:06:39.207193    9360 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 05:06:39.207242    9360 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 05:06:39.210854    9360 ssh_runner.go:195] Run: which lz4
	I1028 05:06:39.212131    9360 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 05:06:39.213470    9360 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 05:06:39.213486    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1028 05:06:40.192249    9360 docker.go:653] duration metric: took 980.184542ms to copy over tarball
	I1028 05:06:40.192330    9360 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 05:06:41.398016    9360 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.20569725s)
	I1028 05:06:41.398030    9360 ssh_runner.go:146] rm: /preloaded.tar.lz4
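The preload is just an lz4-compressed tarball of the Docker image store (layers plus repositories.json) unpacked over /var, after which the daemon is restarted so it re-reads the restored state. The manual equivalent, assuming the tarball is already on the guest at the path used above:

    TARBALL=/preloaded.tar.lz4
    # preserve file capabilities while unpacking, as the log's tar invocation does
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$TARBALL"
    sudo rm -f "$TARBALL"
    sudo systemctl restart docker   # pick up the restored image store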
	I1028 05:06:41.414261    9360 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 05:06:41.417870    9360 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1028 05:06:41.423472    9360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:06:41.498064    9360 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 05:06:42.673428    9360 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.175375042s)
	I1028 05:06:42.673524    9360 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 05:06:42.684665    9360 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 05:06:42.684679    9360 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 05:06:42.684684    9360 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 05:06:42.690382    9360 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:06:42.692706    9360 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:06:42.694629    9360 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:06:42.694663    9360 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:06:42.696798    9360 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:06:42.696843    9360 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:06:42.698304    9360 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:06:42.698403    9360 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 05:06:42.699118    9360 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 05:06:42.699345    9360 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:06:42.700665    9360 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:06:42.700843    9360 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 05:06:42.701688    9360 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 05:06:42.701790    9360 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:06:42.702800    9360 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:06:42.703577    9360 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:06:43.230725    9360 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:06:43.233765    9360 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:06:43.238477    9360 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:06:43.254354    9360 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1028 05:06:43.254392    9360 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:06:43.254450    9360 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:06:43.263615    9360 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1028 05:06:43.263920    9360 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1028 05:06:43.263937    9360 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:06:43.264019    9360 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:06:43.264029    9360 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:06:43.264183    9360 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:06:43.277239    9360 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1028 05:06:43.289882    9360 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1028 05:06:43.289916    9360 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1028 05:06:43.302349    9360 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1028 05:06:43.313039    9360 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1028 05:06:43.313062    9360 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1028 05:06:43.313125    9360 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1028 05:06:43.321936    9360 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1028 05:06:43.323109    9360 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1028 05:06:43.332525    9360 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1028 05:06:43.332546    9360 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1028 05:06:43.332607    9360 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1028 05:06:43.342774    9360 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1028 05:06:43.342905    9360 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1028 05:06:43.344692    9360 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1028 05:06:43.344703    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1028 05:06:43.352733    9360 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1028 05:06:43.352740    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1028 05:06:43.382211    9360 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1028 05:06:43.406763    9360 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1028 05:06:43.406917    9360 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:06:43.416976    9360 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1028 05:06:43.417001    9360 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:06:43.417101    9360 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:06:43.426200    9360 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1028 05:06:43.426341    9360 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1028 05:06:43.427837    9360 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1028 05:06:43.427849    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1028 05:06:43.482003    9360 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:06:43.487128    9360 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1028 05:06:43.487145    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1028 05:06:43.501370    9360 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1028 05:06:43.501399    9360 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:06:43.501462    9360 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W1028 05:06:43.540549    9360 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1028 05:06:43.540667    9360 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:06:43.581786    9360 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1028 05:06:43.581811    9360 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1028 05:06:43.581845    9360 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1028 05:06:43.581863    9360 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:06:43.581928    9360 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:06:43.628858    9360 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 05:06:43.629006    9360 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 05:06:43.631028    9360 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1028 05:06:43.631044    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1028 05:06:43.662724    9360 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 05:06:43.662739    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1028 05:06:43.898383    9360 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 05:06:43.898424    9360 cache_images.go:92] duration metric: took 1.213759709s to LoadCachedImages
	W1028 05:06:43.898466    9360 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1028 05:06:43.898474    9360 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1028 05:06:43.898533    9360 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-581000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
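The drop-in above blanks the distro ExecStart and substitutes the versioned kubelet binary with cri-dockerd as its CRI endpoint. Once installed, the merged result can be inspected through systemd itself:

    systemctl cat kubelet                 # unit file plus all drop-ins, merged
    systemctl show kubelet -p ExecStart   # the single ExecStart systemd will run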
	I1028 05:06:43.898607    9360 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 05:06:43.913490    9360 cni.go:84] Creating CNI manager for ""
	I1028 05:06:43.913501    9360 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:06:43.913511    9360 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 05:06:43.913520    9360 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-581000 NodeName:running-upgrade-581000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 05:06:43.913582    9360 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-581000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 05:06:43.913652    9360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1028 05:06:43.916491    9360 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 05:06:43.916531    9360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 05:06:43.918988    9360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1028 05:06:43.924545    9360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 05:06:43.929292    9360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
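The rendered kubeadm config just staged to /var/tmp/minikube/kubeadm.yaml.new can be evaluated by kubeadm without mutating the node; a quick sanity check against the staged file (a sketch, not part of the test flow):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run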
	I1028 05:06:43.934584    9360 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1028 05:06:43.935898    9360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:06:44.022070    9360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 05:06:44.027293    9360 certs.go:68] Setting up /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000 for IP: 10.0.2.15
	I1028 05:06:44.027311    9360 certs.go:194] generating shared ca certs ...
	I1028 05:06:44.027320    9360 certs.go:226] acquiring lock for ca certs: {Name:mk596dd32716491232c9389abcfad3254ffdbfdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:06:44.027570    9360 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.key
	I1028 05:06:44.027608    9360 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/proxy-client-ca.key
	I1028 05:06:44.027615    9360 certs.go:256] generating profile certs ...
	I1028 05:06:44.027671    9360 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/client.key
	I1028 05:06:44.027682    9360 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.key.23ea3028
	I1028 05:06:44.027695    9360 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.crt.23ea3028 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1028 05:06:44.225461    9360 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.crt.23ea3028 ...
	I1028 05:06:44.225487    9360 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.crt.23ea3028: {Name:mk2e191d76a8fcd343efa14bb08437795ec7ff91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:06:44.225817    9360 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.key.23ea3028 ...
	I1028 05:06:44.225821    9360 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.key.23ea3028: {Name:mk558b9496a4c73332d3532352007094d66351a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:06:44.225983    9360 certs.go:381] copying /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.crt.23ea3028 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.crt
	I1028 05:06:44.226114    9360 certs.go:385] copying /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.key.23ea3028 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.key
	I1028 05:06:44.226252    9360 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/proxy-client.key
	I1028 05:06:44.226394    9360 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/7452.pem (1338 bytes)
	W1028 05:06:44.226417    9360 certs.go:480] ignoring /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/7452_empty.pem, impossibly tiny 0 bytes
	I1028 05:06:44.226422    9360 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 05:06:44.226441    9360 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem (1082 bytes)
	I1028 05:06:44.226465    9360 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem (1123 bytes)
	I1028 05:06:44.226484    9360 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/key.pem (1675 bytes)
	I1028 05:06:44.226522    9360 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem (1708 bytes)
	I1028 05:06:44.226981    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 05:06:44.234872    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 05:06:44.242733    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 05:06:44.250126    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 05:06:44.257071    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 05:06:44.263739    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 05:06:44.270817    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 05:06:44.277871    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 05:06:44.284792    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 05:06:44.291321    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I1028 05:06:44.298871    9360 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem --> /usr/share/ca-certificates/74522.pem (1708 bytes)
	I1028 05:06:44.306337    9360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 05:06:44.311853    9360 ssh_runner.go:195] Run: openssl version
	I1028 05:06:44.313875    9360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 05:06:44.316894    9360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 05:06:44.318492    9360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 12:06 /usr/share/ca-certificates/minikubeCA.pem
	I1028 05:06:44.318519    9360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 05:06:44.320434    9360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 05:06:44.323377    9360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I1028 05:06:44.326740    9360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7452.pem
	I1028 05:06:44.328264    9360 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:54 /usr/share/ca-certificates/7452.pem
	I1028 05:06:44.328289    9360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I1028 05:06:44.330138    9360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I1028 05:06:44.332941    9360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/74522.pem && ln -fs /usr/share/ca-certificates/74522.pem /etc/ssl/certs/74522.pem"
	I1028 05:06:44.335953    9360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/74522.pem
	I1028 05:06:44.337448    9360 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:54 /usr/share/ca-certificates/74522.pem
	I1028 05:06:44.337474    9360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/74522.pem
	I1028 05:06:44.339234    9360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/74522.pem /etc/ssl/certs/3ec20f2e.0"
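The openssl/ln sequence above rebuilds OpenSSL's hashed CA directory: "openssl x509 -hash -noout" prints the subject-name hash of each PEM, and a symlink named <hash>.0 in /etc/ssl/certs is what lets OpenSSL locate the CA at verification time; b5213941.0, 51391683.0 and 3ec20f2e.0 are exactly those hashes for the three bundles. One iteration as a standalone sketch:

    # Recreate the hashed lookup link for one CA bundle.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"    # ".0": first cert with this hash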
	I1028 05:06:44.342408    9360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 05:06:44.343853    9360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 05:06:44.345698    9360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 05:06:44.347447    9360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 05:06:44.349276    9360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 05:06:44.351258    9360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 05:06:44.353149    9360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
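Each existing control-plane certificate is then vetted with "openssl x509 -checkend 86400", which exits non-zero if the certificate expires within 86400 seconds (24 hours); a failure here would force the certs to be regenerated rather than reused. The check in isolation:

    # Exit status says whether the cert is still valid 24h from now.
    if openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
        echo "valid for at least another 24h"
    else
        echo "expires within 24h; would be regenerated"
    fi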
	I1028 05:06:44.354765    9360 kubeadm.go:392] StartCluster: {Name:running-upgrade-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58030 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 05:06:44.354849    9360 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 05:06:44.365033    9360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 05:06:44.368173    9360 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 05:06:44.368183    9360 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 05:06:44.368222    9360 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 05:06:44.371968    9360 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 05:06:44.371999    9360 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-581000" does not appear in /Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:06:44.372013    9360 kubeconfig.go:62] /Users/jenkins/minikube-integration/19875-6942/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-581000" cluster setting kubeconfig missing "running-upgrade-581000" context setting]
	I1028 05:06:44.372208    9360 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/kubeconfig: {Name:mk90a124f6c448e81120cf90ba82d6374e9cd851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:06:44.372893    9360 kapi.go:59] client config for running-upgrade-581000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/client.key", CAFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104b56680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
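The verify step found no "running-upgrade-581000" entry in the host kubeconfig, so minikube repairs the file under a write lock and builds the rest.Config above against https://10.0.2.15:8443 with the profile's client certificate. In kubectl terms the repair amounts to roughly the following (an illustration only; minikube edits the kubeconfig file directly rather than shelling out):

    # Hypothetical kubectl equivalent of the kubeconfig repair.
    kubectl config set-cluster running-upgrade-581000 \
        --server=https://10.0.2.15:8443 \
        --certificate-authority="$HOME/.minikube/ca.crt"
    kubectl config set-credentials running-upgrade-581000 \
        --client-certificate="$HOME/.minikube/profiles/running-upgrade-581000/client.crt" \
        --client-key="$HOME/.minikube/profiles/running-upgrade-581000/client.key"
    kubectl config set-context running-upgrade-581000 \
        --cluster=running-upgrade-581000 --user=running-upgrade-581000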
	I1028 05:06:44.373896    9360 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 05:06:44.376719    9360 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-581000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
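The diff explains why a plain restart is not attempted: the regenerated kubeadm.yaml addresses cri-dockerd through a unix:// URL (newer Kubernetes releases expect an explicit scheme on the CRI socket), switches the kubelet cgroup driver from systemd to cgroupfs, and adds hairpinMode and runtimeRequestTimeout, so the cluster must be reconfigured from the new file. The detect-then-replace flow reduces to:

    # Reconfigure only when the rendered config differs from the live one
    # (diff exits non-zero on any difference).
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi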
	I1028 05:06:44.376725    9360 kubeadm.go:1160] stopping kube-system containers ...
	I1028 05:06:44.376776    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 05:06:44.387864    9360 docker.go:483] Stopping containers: [5d4427ca06c2 d7b1aeec235c 0a6b1ab5d1d8 4a5a90d564ff abddc63aa6fa f66c957c1d88 541001035c33 4d3a1d76014f c92c29897cfe add71d73a2be 0589805c2cad 0e3add9af1ea d0b3ea1bd27e e3347149815e 2308d7088716]
	I1028 05:06:44.387941    9360 ssh_runner.go:195] Run: docker stop 5d4427ca06c2 d7b1aeec235c 0a6b1ab5d1d8 4a5a90d564ff abddc63aa6fa f66c957c1d88 541001035c33 4d3a1d76014f c92c29897cfe add71d73a2be 0589805c2cad 0e3add9af1ea d0b3ea1bd27e e3347149815e 2308d7088716
	I1028 05:06:44.445780    9360 ssh_runner.go:195] Run: sudo systemctl stop kubelet
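Before reconfiguring, every kube-system container is stopped along with the kubelet; the filter k8s_.*_(kube-system)_ works because kubelet-created container names embed the pod's namespace (k8s_<container>_<pod>_<namespace>_<uid>_<attempt>). The commands above compress to:

    # Stop every container belonging to a kube-system pod, then the kubelet.
    docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' \
        | xargs -r docker stop
    sudo systemctl stop kubelet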
	I1028 05:06:44.536609    9360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 05:06:44.540385    9360 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct 28 12:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Oct 28 12:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 28 12:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 28 12:06 /etc/kubernetes/scheduler.conf
	
	I1028 05:06:44.540424    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/admin.conf
	I1028 05:06:44.543379    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 05:06:44.543427    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 05:06:44.546653    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/kubelet.conf
	I1028 05:06:44.549558    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 05:06:44.549591    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 05:06:44.552076    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/controller-manager.conf
	I1028 05:06:44.554836    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 05:06:44.554863    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 05:06:44.557782    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/scheduler.conf
	I1028 05:06:44.560551    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1028 05:06:44.560583    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
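Each of the four kubeconfigs under /etc/kubernetes is grepped for the expected endpoint https://control-plane.minikube.internal:58030; because the pre-upgrade files do not reference it, every grep exits 1 and the file is deleted so the upcoming kubeadm kubeconfig phase can regenerate it. As a loop:

    # Drop any kubeconfig that does not reference the expected endpoint.
    endpoint=https://control-plane.minikube.internal:58030
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
            || sudo rm -f "/etc/kubernetes/$f.conf"
    done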
	I1028 05:06:44.563296    9360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 05:06:44.566572    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:06:44.589569    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:06:45.086198    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:06:45.286447    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:06:45.311187    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
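Rather than a full "kubeadm init", minikube replays only the individual init phases against the new config: certs, kubeconfig, kubelet-start, control-plane and local etcd, in that order. The five Run: lines above condense to:

    # Same phase sequence as above; $phase is deliberately unquoted so
    # "certs all" splits into the subcommand and its argument.
    cfg=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
        sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
            kubeadm init phase $phase --config "$cfg"
    done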
	I1028 05:06:45.333682    9360 api_server.go:52] waiting for apiserver process to appear ...
	I1028 05:06:45.333778    9360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:06:45.835905    9360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:06:46.335840    9360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:06:46.344571    9360 api_server.go:72] duration metric: took 1.010910375s to wait for apiserver process to appear ...
	I1028 05:06:46.344587    9360 api_server.go:88] waiting for apiserver healthz status ...
	I1028 05:06:46.344615    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:06:51.346682    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:06:51.346774    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:06:56.347933    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:06:56.348017    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:07:01.349441    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:07:01.349493    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:07:06.350509    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:07:06.350547    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:07:11.351451    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:07:11.351546    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:07:16.353800    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:07:16.353901    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:07:21.356550    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:07:21.356650    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:07:26.359382    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:07:26.359487    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:07:31.360976    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:07:31.361084    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:07:36.363774    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:07:36.363869    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:07:41.366504    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:07:41.366605    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:07:46.369296    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
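From here the restart is stuck in a poll loop: every probe of https://10.0.2.15:8443/healthz runs into the 5-second client timeout ("context deadline exceeded ... while awaiting headers"), meaning nothing ever answers on 8443, and minikube alternates between re-probing and dumping component logs. A manual probe equivalent to one iteration (the curl flags are an illustration; minikube uses its own Go HTTP client with the cluster CA):

    # Poll the apiserver health endpoint, 5s per attempt, until it answers.
    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz; do
        echo "apiserver not healthy yet"; sleep 1
    done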
	I1028 05:07:46.369833    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:07:46.408675    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:07:46.408843    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:07:46.434801    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:07:46.434929    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:07:46.448635    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:07:46.448725    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:07:46.460787    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:07:46.460872    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:07:46.471477    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:07:46.471556    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:07:46.487104    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:07:46.487193    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:07:46.498171    9360 logs.go:282] 0 containers: []
	W1028 05:07:46.498181    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:07:46.498247    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:07:46.508405    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:07:46.508422    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:07:46.508428    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:07:46.581680    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:07:46.581694    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:07:46.593499    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:07:46.593512    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:07:46.619707    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:07:46.619715    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:07:46.636163    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:07:46.636171    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:07:46.655949    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:07:46.655960    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:07:46.667014    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:07:46.667030    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:07:46.689124    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:07:46.689137    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:07:46.703130    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:07:46.703139    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:07:46.714692    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:07:46.714703    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:07:46.752223    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:07:46.752229    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:07:46.757998    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:07:46.758011    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:07:46.772380    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:07:46.772390    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:07:46.784203    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:07:46.784215    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:07:46.797907    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:07:46.797922    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:07:46.816559    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:07:46.816572    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:07:46.828294    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:07:46.828303    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
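Each failed health window triggers the same diagnostic sweep, which repeats nearly verbatim below: kubelet and docker/cri-docker journals, filtered dmesg, "docker logs --tail 400" for both instances of every control-plane container, "kubectl describe nodes" via the in-VM binary, and a crictl-or-docker container listing. Inspecting one suspect by hand looks like:

    # Pull the last 400 log lines from the newer kube-apiserver container.
    docker logs --tail 400 bdc470a6e115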
	I1028 05:07:49.342175    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:07:54.343180    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:07:54.343815    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:07:54.384203    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:07:54.384371    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:07:54.405910    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:07:54.406036    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:07:54.421502    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:07:54.421586    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:07:54.434197    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:07:54.434281    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:07:54.445282    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:07:54.445358    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:07:54.456514    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:07:54.456584    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:07:54.467034    9360 logs.go:282] 0 containers: []
	W1028 05:07:54.467049    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:07:54.467115    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:07:54.477382    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:07:54.477403    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:07:54.477409    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:07:54.513939    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:07:54.513948    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:07:54.525431    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:07:54.525440    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:07:54.543340    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:07:54.543353    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:07:54.555295    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:07:54.555307    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:07:54.580175    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:07:54.580184    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:07:54.596407    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:07:54.596419    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:07:54.607656    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:07:54.607668    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:07:54.619499    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:07:54.619512    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:07:54.623855    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:07:54.623864    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:07:54.658693    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:07:54.658709    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:07:54.676062    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:07:54.676073    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:07:54.687657    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:07:54.687671    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:07:54.707377    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:07:54.707387    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:07:54.721163    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:07:54.721172    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:07:54.733323    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:07:54.733334    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:07:54.748284    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:07:54.748294    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:07:57.261947    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:08:02.264459    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:08:02.265083    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:08:02.305418    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:08:02.305570    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:08:02.331952    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:08:02.332081    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:08:02.346549    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:08:02.346641    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:08:02.358359    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:08:02.358441    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:08:02.368986    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:08:02.369062    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:08:02.384139    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:08:02.384217    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:08:02.394383    9360 logs.go:282] 0 containers: []
	W1028 05:08:02.394395    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:08:02.394456    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:08:02.405296    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:08:02.405313    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:08:02.405319    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:08:02.422545    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:08:02.422555    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:08:02.434561    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:08:02.434572    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:08:02.446354    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:08:02.446366    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:08:02.485554    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:08:02.485564    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:08:02.497630    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:08:02.497641    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:08:02.509606    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:08:02.509616    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:08:02.521296    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:08:02.521307    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:08:02.526206    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:08:02.526215    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:08:02.562439    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:08:02.562453    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:08:02.574429    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:08:02.574441    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:08:02.591982    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:08:02.591992    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:08:02.604076    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:08:02.604087    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:08:02.630167    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:08:02.630176    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:08:02.644525    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:08:02.644535    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:08:02.663672    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:08:02.663682    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:08:02.680463    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:08:02.680476    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:08:05.196150    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:08:10.198410    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:08:10.198947    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:08:10.243990    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:08:10.244104    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:08:10.259994    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:08:10.260085    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:08:10.273202    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:08:10.273270    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:08:10.283932    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:08:10.284013    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:08:10.294864    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:08:10.294941    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:08:10.305827    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:08:10.305909    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:08:10.316318    9360 logs.go:282] 0 containers: []
	W1028 05:08:10.316329    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:08:10.316391    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:08:10.326831    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:08:10.326849    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:08:10.326854    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:08:10.346164    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:08:10.346173    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:08:10.360162    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:08:10.360173    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:08:10.380269    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:08:10.380281    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:08:10.391915    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:08:10.391926    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:08:10.403143    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:08:10.403158    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:08:10.407447    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:08:10.407455    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:08:10.442228    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:08:10.442240    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:08:10.456681    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:08:10.456692    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:08:10.468511    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:08:10.468525    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:08:10.486354    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:08:10.486367    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:08:10.498233    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:08:10.498242    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:08:10.510903    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:08:10.510941    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:08:10.522565    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:08:10.522576    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:08:10.548269    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:08:10.548278    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:08:10.585454    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:08:10.585463    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:08:10.597178    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:08:10.597188    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:08:13.116227    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:08:18.116989    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:08:18.117595    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:08:18.161512    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:08:18.161664    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:08:18.184525    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:08:18.184641    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:08:18.202606    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:08:18.202700    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:08:18.213825    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:08:18.213899    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:08:18.224290    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:08:18.224359    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:08:18.235166    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:08:18.235235    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:08:18.245301    9360 logs.go:282] 0 containers: []
	W1028 05:08:18.245313    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:08:18.245380    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:08:18.257129    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:08:18.257152    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:08:18.257157    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:08:18.271429    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:08:18.271442    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:08:18.285833    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:08:18.285844    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:08:18.298792    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:08:18.298802    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:08:18.318271    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:08:18.318282    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:08:18.332530    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:08:18.332542    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:08:18.343979    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:08:18.343992    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:08:18.348579    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:08:18.348589    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:08:18.369578    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:08:18.369587    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:08:18.384790    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:08:18.384803    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:08:18.418953    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:08:18.418967    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:08:18.444007    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:08:18.444018    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:08:18.455908    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:08:18.455918    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:08:18.467668    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:08:18.467683    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:08:18.484690    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:08:18.484701    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:08:18.520971    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:08:18.520980    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:08:18.538995    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:08:18.539005    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:08:21.052368    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:08:26.055156    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:08:26.055801    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:08:26.095986    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:08:26.096130    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:08:26.121377    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:08:26.121536    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:08:26.135617    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:08:26.135702    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:08:26.153362    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:08:26.153450    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:08:26.163858    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:08:26.163927    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:08:26.176022    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:08:26.176106    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:08:26.186395    9360 logs.go:282] 0 containers: []
	W1028 05:08:26.186406    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:08:26.186477    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:08:26.196933    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:08:26.196950    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:08:26.196955    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:08:26.219721    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:08:26.219730    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:08:26.232503    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:08:26.232514    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:08:26.244536    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:08:26.244548    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:08:26.256530    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:08:26.256541    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:08:26.268406    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:08:26.268417    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:08:26.280951    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:08:26.280962    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:08:26.305583    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:08:26.305590    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:08:26.341620    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:08:26.341627    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:08:26.355766    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:08:26.355785    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:08:26.369893    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:08:26.369901    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:08:26.382653    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:08:26.382664    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:08:26.402384    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:08:26.402395    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:08:26.413963    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:08:26.413974    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:08:26.418295    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:08:26.418304    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:08:26.453337    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:08:26.453347    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:08:26.468603    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:08:26.468614    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:08:28.990440    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:08:33.992822    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:08:33.993318    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:08:34.031391    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:08:34.031536    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:08:34.051278    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:08:34.051383    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:08:34.068137    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:08:34.068222    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:08:34.079892    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:08:34.079976    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:08:34.090829    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:08:34.090894    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:08:34.102035    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:08:34.102108    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:08:34.112622    9360 logs.go:282] 0 containers: []
	W1028 05:08:34.112634    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:08:34.112701    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:08:34.123308    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:08:34.123325    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:08:34.123330    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:08:34.137189    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:08:34.137202    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:08:34.151612    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:08:34.151625    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:08:34.168089    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:08:34.168101    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:08:34.185574    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:08:34.185584    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:08:34.197866    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:08:34.197880    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:08:34.211059    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:08:34.211069    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:08:34.246865    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:08:34.246871    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:08:34.260769    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:08:34.260778    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:08:34.278773    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:08:34.278784    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:08:34.290004    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:08:34.290013    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:08:34.314081    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:08:34.314088    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:08:34.326154    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:08:34.326162    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:08:34.344642    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:08:34.344657    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:08:34.363558    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:08:34.363570    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:08:34.377155    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:08:34.377168    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:08:34.381483    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:08:34.381490    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:08:36.918768    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:08:41.921443    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:08:41.922045    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:08:41.961727    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:08:41.961893    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:08:41.984492    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:08:41.984620    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:08:42.000241    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:08:42.000330    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:08:42.012440    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:08:42.012520    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:08:42.023152    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:08:42.023229    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:08:42.034437    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:08:42.034518    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:08:42.045789    9360 logs.go:282] 0 containers: []
	W1028 05:08:42.045804    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:08:42.045869    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:08:42.056683    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:08:42.056705    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:08:42.056710    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:08:42.076491    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:08:42.076503    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:08:42.092163    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:08:42.092175    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:08:42.116355    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:08:42.116364    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:08:42.128342    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:08:42.128356    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:08:42.164366    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:08:42.164381    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:08:42.183436    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:08:42.183449    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:08:42.187854    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:08:42.187860    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:08:42.205194    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:08:42.205204    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:08:42.216673    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:08:42.216685    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:08:42.228108    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:08:42.228117    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:08:42.242055    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:08:42.242065    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:08:42.256539    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:08:42.256551    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:08:42.274283    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:08:42.274295    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:08:42.285683    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:08:42.285696    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:08:42.297089    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:08:42.297101    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:08:42.308538    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:08:42.308552    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
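
Each failed probe is followed by the same inventory step: one docker ps query per control-plane component, logged by logs.go:282. A sketch of that enumeration, with the component names and the docker command copied from the log; running it locally rather than over SSH (ssh_runner.go) is a simplification. The pairs of IDs per component are consistent with -a matching both the current container and an earlier exited one:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // matches the k8s_<component> prefix, as in the log's docker ps calls.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // same shape as logs.go:282
        }
    }
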
	I1028 05:08:44.846969    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:08:49.848468    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:08:49.848669    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:08:49.860709    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:08:49.860797    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:08:49.872002    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:08:49.872078    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:08:49.882306    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:08:49.882379    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:08:49.892943    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:08:49.893018    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:08:49.903463    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:08:49.903538    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:08:49.913766    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:08:49.913839    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:08:49.926262    9360 logs.go:282] 0 containers: []
	W1028 05:08:49.926280    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:08:49.926353    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:08:49.936799    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:08:49.936823    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:08:49.936828    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:08:49.941857    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:08:49.941863    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:08:49.959096    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:08:49.959107    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:08:49.970965    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:08:49.970977    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:08:50.009330    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:08:50.009343    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:08:50.028253    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:08:50.028268    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:08:50.040019    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:08:50.040029    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:08:50.054913    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:08:50.054923    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:08:50.066689    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:08:50.066699    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:08:50.078699    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:08:50.078710    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:08:50.092734    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:08:50.092744    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:08:50.104120    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:08:50.104130    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:08:50.115151    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:08:50.115162    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:08:50.151388    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:08:50.151398    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:08:50.170271    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:08:50.170281    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:08:50.181510    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:08:50.181520    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:08:50.206880    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:08:50.206887    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:08:52.722629    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:08:57.725280    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:08:57.725956    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:08:57.764201    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:08:57.764401    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:08:57.791977    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:08:57.792073    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:08:57.808655    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:08:57.808736    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:08:57.820468    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:08:57.820550    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:08:57.831035    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:08:57.831112    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:08:57.842112    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:08:57.842189    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:08:57.853025    9360 logs.go:282] 0 containers: []
	W1028 05:08:57.853035    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:08:57.853101    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:08:57.864449    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:08:57.864464    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:08:57.864469    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:08:57.875876    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:08:57.875889    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:08:57.895679    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:08:57.895692    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:08:57.907611    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:08:57.907623    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:08:57.919576    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:08:57.919588    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:08:57.931090    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:08:57.931099    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:08:57.954972    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:08:57.954977    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:08:57.990383    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:08:57.990397    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:08:58.011794    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:08:58.011804    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:08:58.030056    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:08:58.030065    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:08:58.042458    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:08:58.042470    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:08:58.078653    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:08:58.078663    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:08:58.089927    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:08:58.089939    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:08:58.094521    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:08:58.094529    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:08:58.108855    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:08:58.108866    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:08:58.123552    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:08:58.123561    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:08:58.140818    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:08:58.140830    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
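
The "Gathering logs for ..." steps that follow each inventory are all plain shell commands dispatched through /bin/bash -c, as the ssh_runner.go lines show: docker logs --tail 400 for each container, journalctl for the kubelet and Docker units, dmesg, and kubectl describe nodes. A compressed sketch, with the commands copied verbatim from the log and the SSH plumbing replaced by local execution:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one diagnostic command and prints its combined output,
    // mirroring a single "Gathering logs for X ..." / "Run: ..." pair.
    func gather(name, cmd string) {
        fmt.Printf("Gathering logs for %s ...\n", name)
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("coredns [72a81bd7e520]", "docker logs --tail 400 72a81bd7e520")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
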
	I1028 05:09:00.652719    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:09:05.654976    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:09:05.655568    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:09:05.695928    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:09:05.696087    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:09:05.718136    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:09:05.718273    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:09:05.738016    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:09:05.738103    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:09:05.752498    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:09:05.752584    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:09:05.765276    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:09:05.765354    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:09:05.776191    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:09:05.776269    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:09:05.786775    9360 logs.go:282] 0 containers: []
	W1028 05:09:05.786789    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:09:05.786858    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:09:05.797581    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:09:05.797599    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:09:05.797605    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:09:05.802560    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:09:05.802568    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:09:05.816406    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:09:05.816417    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:09:05.834949    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:09:05.834962    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:09:05.851239    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:09:05.851251    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:09:05.866269    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:09:05.866281    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:09:05.877589    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:09:05.877604    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:09:05.889393    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:09:05.889406    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:09:05.928805    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:09:05.928817    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:09:05.949858    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:09:05.949869    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:09:05.961830    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:09:05.961841    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:09:05.973682    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:09:05.973695    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:09:05.991150    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:09:05.991160    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:09:06.002572    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:09:06.002585    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:09:06.026023    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:09:06.026029    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:09:06.061487    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:09:06.061501    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:09:06.080648    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:09:06.080658    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:09:08.594188    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:09:13.596367    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:09:13.596510    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:09:13.609939    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:09:13.610030    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:09:13.623389    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:09:13.623477    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:09:13.634387    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:09:13.634470    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:09:13.645482    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:09:13.645557    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:09:13.656705    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:09:13.656787    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:09:13.668116    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:09:13.668190    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:09:13.680779    9360 logs.go:282] 0 containers: []
	W1028 05:09:13.680790    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:09:13.680862    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:09:13.691908    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:09:13.691925    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:09:13.691932    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:09:13.729358    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:09:13.729381    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:09:13.745429    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:09:13.745443    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:09:13.772837    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:09:13.772858    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:09:13.791888    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:09:13.791906    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:09:13.804486    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:09:13.804502    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:09:13.818302    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:09:13.818314    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:09:13.831214    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:09:13.831228    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:09:13.836001    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:09:13.836016    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:09:13.876385    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:09:13.876397    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:09:13.893000    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:09:13.893014    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:09:13.915663    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:09:13.915684    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:09:13.931779    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:09:13.931793    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:09:13.945314    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:09:13.945326    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:09:13.963898    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:09:13.963912    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:09:13.985452    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:09:13.985466    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:09:13.999659    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:09:13.999672    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:09:16.521916    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:09:21.524131    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:09:21.524326    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:09:21.537805    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:09:21.537893    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:09:21.549949    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:09:21.550030    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:09:21.560792    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:09:21.560872    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:09:21.570984    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:09:21.571061    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:09:21.581417    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:09:21.581500    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:09:21.592012    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:09:21.592091    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:09:21.602276    9360 logs.go:282] 0 containers: []
	W1028 05:09:21.602290    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:09:21.602359    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:09:21.613642    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:09:21.613659    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:09:21.613665    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:09:21.649511    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:09:21.649525    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:09:21.661727    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:09:21.661738    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:09:21.673434    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:09:21.673444    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:09:21.684887    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:09:21.684900    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:09:21.689779    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:09:21.689786    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:09:21.709176    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:09:21.709186    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:09:21.720353    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:09:21.720365    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:09:21.735223    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:09:21.735237    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:09:21.747131    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:09:21.747145    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:09:21.784753    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:09:21.784761    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:09:21.798798    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:09:21.798813    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:09:21.816346    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:09:21.816356    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:09:21.834029    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:09:21.834042    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:09:21.857172    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:09:21.857178    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:09:21.871447    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:09:21.871458    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:09:21.886312    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:09:21.886322    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:09:24.399687    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:09:29.401784    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:09:29.401893    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:09:29.413016    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:09:29.413096    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:09:29.427656    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:09:29.427737    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:09:29.438111    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:09:29.438179    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:09:29.448401    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:09:29.448473    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:09:29.459297    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:09:29.459380    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:09:29.470037    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:09:29.470122    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:09:29.480548    9360 logs.go:282] 0 containers: []
	W1028 05:09:29.480561    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:09:29.480625    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:09:29.491527    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:09:29.491546    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:09:29.491550    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:09:29.496434    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:09:29.496440    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:09:29.514131    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:09:29.514142    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:09:29.551397    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:09:29.551406    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:09:29.562465    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:09:29.562475    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:09:29.574171    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:09:29.574182    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:09:29.586865    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:09:29.586875    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:09:29.601399    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:09:29.601410    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:09:29.616236    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:09:29.616247    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:09:29.630008    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:09:29.630018    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:09:29.645232    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:09:29.645244    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:09:29.657415    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:09:29.657425    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:09:29.677718    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:09:29.677728    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:09:29.701593    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:09:29.701601    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:09:29.738105    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:09:29.738116    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:09:29.750164    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:09:29.750174    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:09:29.761849    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:09:29.761861    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:09:32.287019    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:09:37.289531    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:09:37.289659    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:09:37.301987    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:09:37.302068    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:09:37.312987    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:09:37.313060    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:09:37.323744    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:09:37.323814    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:09:37.335039    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:09:37.335110    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:09:37.345304    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:09:37.345383    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:09:37.355902    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:09:37.355977    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:09:37.365964    9360 logs.go:282] 0 containers: []
	W1028 05:09:37.365977    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:09:37.366047    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:09:37.376650    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:09:37.376666    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:09:37.376671    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:09:37.395755    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:09:37.395765    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:09:37.412428    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:09:37.412438    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:09:37.423613    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:09:37.423623    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:09:37.435685    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:09:37.435696    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:09:37.475765    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:09:37.475786    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:09:37.511778    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:09:37.511789    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:09:37.523467    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:09:37.523479    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:09:37.534963    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:09:37.534976    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:09:37.550113    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:09:37.550126    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:09:37.568991    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:09:37.569009    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:09:37.581079    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:09:37.581090    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:09:37.592502    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:09:37.592513    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:09:37.604897    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:09:37.604907    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:09:37.629183    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:09:37.629191    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:09:37.633799    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:09:37.633809    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:09:37.651016    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:09:37.651029    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:09:40.171125    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:09:45.171427    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:09:45.171682    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:09:45.208659    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:09:45.208744    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:09:45.228139    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:09:45.228230    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:09:45.239348    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:09:45.239424    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:09:45.254439    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:09:45.254526    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:09:45.264895    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:09:45.264970    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:09:45.280199    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:09:45.280281    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:09:45.290522    9360 logs.go:282] 0 containers: []
	W1028 05:09:45.290535    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:09:45.290604    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:09:45.305270    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:09:45.305285    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:09:45.305290    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:09:45.316834    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:09:45.316845    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:09:45.352696    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:09:45.352703    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:09:45.364673    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:09:45.364686    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:09:45.383623    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:09:45.383635    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:09:45.397736    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:09:45.397748    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:09:45.414852    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:09:45.414864    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:09:45.434050    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:09:45.434064    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:09:45.445687    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:09:45.445699    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:09:45.450108    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:09:45.450116    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:09:45.486373    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:09:45.486385    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:09:45.501496    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:09:45.501506    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:09:45.513233    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:09:45.513247    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:09:45.531445    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:09:45.531455    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:09:45.542294    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:09:45.542305    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:09:45.564644    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:09:45.564651    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:09:45.587840    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:09:45.587854    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
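
Read across the timestamps, the cycles above settle into a fixed cadence: a probe that times out after five seconds, a diagnostics dump taking roughly fifteen to twenty-five seconds, then about a 2.5-second pause before the next probe. A sketch of such a retry loop; the pause and probe timeout are read off the log, while the overall deadline used here is an assumed placeholder, not a value taken from minikube:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForAPIServer retries a healthz probe until it succeeds or an
    // overall deadline passes, dumping diagnostics after each failure.
    func waitForAPIServer(probe func() error, dump func(), deadline time.Duration) error {
        start := time.Now()
        for time.Since(start) < deadline {
            if err := probe(); err == nil {
                return nil
            }
            dump()                              // the "Gathering logs for ..." cycle
            time.Sleep(2500 * time.Millisecond) // pause observed between cycles in the log
        }
        return errors.New("apiserver never became healthy")
    }

    func main() {
        err := waitForAPIServer(
            func() error { return errors.New("context deadline exceeded") }, // stand-in probe that always fails
            func() { fmt.Println("dumping component logs ...") },
            10*time.Second) // assumption: short deadline so the sketch terminates quickly
        fmt.Println(err)
    }
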
	I1028 05:09:48.102085    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:09:53.104898    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:09:53.105121    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:09:53.127595    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:09:53.127698    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:09:53.140344    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:09:53.140432    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:09:53.152956    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:09:53.153047    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:09:53.163592    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:09:53.163679    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:09:53.175988    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:09:53.176067    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:09:53.186292    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:09:53.186371    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:09:53.196806    9360 logs.go:282] 0 containers: []
	W1028 05:09:53.196816    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:09:53.196878    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:09:53.207690    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:09:53.207711    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:09:53.207716    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:09:53.224993    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:09:53.225004    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:09:53.236938    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:09:53.236949    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:09:53.273975    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:09:53.273987    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:09:53.288293    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:09:53.288307    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:09:53.299686    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:09:53.299700    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:09:53.311101    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:09:53.311113    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:09:53.326509    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:09:53.326520    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:09:53.364097    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:09:53.364108    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:09:53.382643    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:09:53.382654    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:09:53.398125    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:09:53.398139    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:09:53.421619    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:09:53.421629    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:09:53.433150    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:09:53.433164    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:09:53.437659    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:09:53.437667    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:09:53.451223    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:09:53.451235    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:09:53.462988    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:09:53.462998    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:09:53.474468    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:09:53.474480    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:09:55.999766    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:01.002520    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:01.002739    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:01.014803    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:01.014882    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:01.025932    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:01.026030    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:01.036826    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:01.036901    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:01.047920    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:01.048010    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:01.058722    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:01.058808    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:01.070054    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:01.070134    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:01.080232    9360 logs.go:282] 0 containers: []
	W1028 05:10:01.080244    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:01.080313    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:01.091233    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:01.091251    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:01.091255    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:01.115656    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:01.115663    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:01.129383    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:01.129396    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:01.141214    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:01.141224    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:01.158330    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:01.158653    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:01.179478    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:01.179495    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:01.197567    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:01.197582    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:01.211592    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:01.211605    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:01.223204    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:01.223215    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:01.235249    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:01.235258    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:01.246346    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:01.246356    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:01.250948    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:01.250955    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:01.286231    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:01.286245    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:01.308074    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:01.308091    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:01.320323    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:01.320333    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:01.334393    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:01.334404    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:01.347882    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:01.347891    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:03.888698    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:08.891009    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:08.891559    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:08.931361    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:08.931528    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:08.955962    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:08.956087    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:08.977480    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:08.977567    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:08.988770    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:08.988849    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:08.999010    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:08.999086    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:09.009851    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:09.009925    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:09.025263    9360 logs.go:282] 0 containers: []
	W1028 05:10:09.025274    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:09.025343    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:09.036001    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:09.036019    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:09.036024    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:09.071712    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:09.071725    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:09.086032    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:09.086044    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:09.101670    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:09.101681    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:09.112676    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:09.112691    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:09.126276    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:09.126285    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:09.164328    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:09.164342    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:09.168662    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:09.168668    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:09.182000    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:09.182010    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:09.205611    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:09.205619    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:09.217555    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:09.217568    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:09.237533    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:09.237545    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:09.254677    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:09.254687    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:09.270296    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:09.270309    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:09.287776    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:09.287786    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:09.300539    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:09.300552    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:09.312047    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:09.312057    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:11.825529    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:16.826264    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:16.826363    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:16.839089    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:16.839170    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:16.850785    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:16.850873    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:16.863309    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:16.863397    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:16.875384    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:16.875467    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:16.887475    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:16.887555    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:16.903382    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:16.903475    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:16.915219    9360 logs.go:282] 0 containers: []
	W1028 05:10:16.915232    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:16.915303    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:16.927094    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:16.927115    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:16.927120    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:16.932277    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:16.932289    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:16.973376    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:16.973392    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:16.990367    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:16.990380    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:17.008239    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:17.008253    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:17.027338    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:17.027350    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:17.043694    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:17.043706    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:17.068632    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:17.068647    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:17.094401    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:17.094413    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:17.107737    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:17.107749    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:17.122373    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:17.122386    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:17.143336    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:17.143353    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:17.162453    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:17.162464    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:17.176726    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:17.176738    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:17.189536    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:17.189548    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:17.229819    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:17.229836    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:17.242699    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:17.242711    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:19.758232    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:24.760837    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:24.761383    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:24.804866    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:24.805027    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:24.824219    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:24.824320    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:24.838773    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:24.838863    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:24.850572    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:24.850663    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:24.862418    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:24.862516    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:24.874216    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:24.874295    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:24.885217    9360 logs.go:282] 0 containers: []
	W1028 05:10:24.885230    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:24.885295    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:24.898440    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:24.898459    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:24.898464    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:24.913213    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:24.913222    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:24.930658    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:24.930670    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:24.949877    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:24.949886    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:24.962074    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:24.962086    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:24.979922    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:24.979933    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:25.014385    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:25.014398    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:25.025826    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:25.025837    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:25.040985    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:25.040996    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:25.056395    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:25.056406    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:25.072888    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:25.072898    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:25.090723    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:25.090733    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:25.105684    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:25.105696    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:25.110564    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:25.110572    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:25.132820    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:25.132830    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:25.149224    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:25.149237    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:25.161333    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:25.161344    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:27.699306    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:32.701529    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:32.701835    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:32.726458    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:32.726576    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:32.742685    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:32.742782    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:32.755910    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:32.755991    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:32.766950    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:32.767023    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:32.777787    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:32.777863    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:32.788145    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:32.788221    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:32.798489    9360 logs.go:282] 0 containers: []
	W1028 05:10:32.798508    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:32.798583    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:32.814530    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:32.814551    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:32.814558    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:32.819941    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:32.819955    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:32.860155    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:32.860166    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:32.873958    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:32.873969    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:32.889411    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:32.889422    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:32.900677    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:32.900692    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:32.918960    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:32.918970    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:32.934411    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:32.934422    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:32.956923    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:32.956933    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:32.992575    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:32.992585    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:33.006987    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:33.006998    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:33.018524    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:33.018535    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:33.037469    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:33.037480    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:33.055358    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:33.055370    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:33.066752    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:33.066763    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:33.078767    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:33.078781    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:33.090382    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:33.090395    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:35.604437    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:40.606527    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:40.606642    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:40.621803    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:40.621886    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:40.634025    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:40.634134    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:40.645273    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:40.645347    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:40.656922    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:40.657005    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:40.667510    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:40.667591    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:40.678470    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:40.678555    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:40.689496    9360 logs.go:282] 0 containers: []
	W1028 05:10:40.689506    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:40.689581    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:40.701381    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:40.701396    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:40.701401    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:40.718463    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:40.718477    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:40.731206    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:40.731217    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:40.769773    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:40.769783    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:40.791323    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:40.791337    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:40.814234    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:40.814250    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:40.818795    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:40.818803    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:40.831068    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:40.831078    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:40.843091    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:40.843102    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:40.857085    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:40.857098    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:40.869157    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:40.869167    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:40.880245    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:40.880256    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:40.891778    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:40.891793    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:40.909708    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:40.909718    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:40.927439    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:40.927450    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:40.939363    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:40.939375    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:40.974747    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:40.974760    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:43.492230    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:48.494617    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:48.494860    9360 kubeadm.go:597] duration metric: took 4m4.131999458s to restartPrimaryControlPlane
	W1028 05:10:48.495050    9360 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 05:10:48.495115    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1028 05:10:49.513490    9360 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.018382167s)
	I1028 05:10:49.513577    9360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 05:10:49.518544    9360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 05:10:49.521493    9360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 05:10:49.524208    9360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 05:10:49.524215    9360 kubeadm.go:157] found existing configuration files:
	
	I1028 05:10:49.524244    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/admin.conf
	I1028 05:10:49.526680    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 05:10:49.526708    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 05:10:49.529691    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/kubelet.conf
	I1028 05:10:49.532309    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 05:10:49.532336    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 05:10:49.534902    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/controller-manager.conf
	I1028 05:10:49.537935    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 05:10:49.537967    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 05:10:49.540660    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/scheduler.conf
	I1028 05:10:49.543105    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 05:10:49.543129    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 05:10:49.546094    9360 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 05:10:49.563231    9360 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1028 05:10:49.563327    9360 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 05:10:49.619480    9360 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 05:10:49.619542    9360 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 05:10:49.619676    9360 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 05:10:49.668674    9360 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 05:10:49.673714    9360 out.go:235]   - Generating certificates and keys ...
	I1028 05:10:49.673747    9360 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 05:10:49.673774    9360 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 05:10:49.673813    9360 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 05:10:49.673840    9360 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 05:10:49.673909    9360 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 05:10:49.673937    9360 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 05:10:49.674042    9360 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 05:10:49.674078    9360 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 05:10:49.674120    9360 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 05:10:49.674162    9360 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 05:10:49.674184    9360 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 05:10:49.674211    9360 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 05:10:49.820116    9360 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 05:10:49.904835    9360 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 05:10:50.056021    9360 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 05:10:50.135922    9360 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 05:10:50.164370    9360 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 05:10:50.165482    9360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 05:10:50.165505    9360 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 05:10:50.258018    9360 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 05:10:50.261414    9360 out.go:235]   - Booting up control plane ...
	I1028 05:10:50.261460    9360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 05:10:50.261495    9360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 05:10:50.261525    9360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 05:10:50.261562    9360 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 05:10:50.261629    9360 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 05:10:54.756230    9360 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501966 seconds
	I1028 05:10:54.756308    9360 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 05:10:54.760027    9360 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 05:10:55.285778    9360 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 05:10:55.286253    9360 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-581000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 05:10:55.789148    9360 kubeadm.go:310] [bootstrap-token] Using token: oagvam.5bwhpiu7oekjyo1h
	I1028 05:10:55.795437    9360 out.go:235]   - Configuring RBAC rules ...
	I1028 05:10:55.795485    9360 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 05:10:55.795519    9360 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 05:10:55.801938    9360 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 05:10:55.802843    9360 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 05:10:55.803606    9360 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 05:10:55.804616    9360 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 05:10:55.807957    9360 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 05:10:56.003112    9360 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 05:10:56.192981    9360 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 05:10:56.193370    9360 kubeadm.go:310] 
	I1028 05:10:56.193398    9360 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 05:10:56.193403    9360 kubeadm.go:310] 
	I1028 05:10:56.193444    9360 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 05:10:56.193448    9360 kubeadm.go:310] 
	I1028 05:10:56.193459    9360 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 05:10:56.193485    9360 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 05:10:56.193514    9360 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 05:10:56.193563    9360 kubeadm.go:310] 
	I1028 05:10:56.193633    9360 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 05:10:56.193646    9360 kubeadm.go:310] 
	I1028 05:10:56.193694    9360 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 05:10:56.193697    9360 kubeadm.go:310] 
	I1028 05:10:56.193723    9360 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 05:10:56.193755    9360 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 05:10:56.193798    9360 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 05:10:56.193801    9360 kubeadm.go:310] 
	I1028 05:10:56.193840    9360 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 05:10:56.193900    9360 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 05:10:56.193905    9360 kubeadm.go:310] 
	I1028 05:10:56.193983    9360 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oagvam.5bwhpiu7oekjyo1h \
	I1028 05:10:56.194063    9360 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f88359f6e177cdd7e99d84e4e5a9c564acd9fd4ce77f443c1fef5fb70f89e325 \
	I1028 05:10:56.194091    9360 kubeadm.go:310] 	--control-plane 
	I1028 05:10:56.194096    9360 kubeadm.go:310] 
	I1028 05:10:56.194135    9360 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 05:10:56.194137    9360 kubeadm.go:310] 
	I1028 05:10:56.194174    9360 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oagvam.5bwhpiu7oekjyo1h \
	I1028 05:10:56.194259    9360 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f88359f6e177cdd7e99d84e4e5a9c564acd9fd4ce77f443c1fef5fb70f89e325 
	I1028 05:10:56.194323    9360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 05:10:56.194335    9360 cni.go:84] Creating CNI manager for ""
	I1028 05:10:56.194346    9360 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:10:56.197218    9360 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 05:10:56.205226    9360 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 05:10:56.208663    9360 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 05:10:56.214425    9360 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 05:10:56.214500    9360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 05:10:56.214503    9360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-581000 minikube.k8s.io/updated_at=2024_10_28T05_10_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=running-upgrade-581000 minikube.k8s.io/primary=true
	I1028 05:10:56.255279    9360 ops.go:34] apiserver oom_adj: -16
	I1028 05:10:56.255274    9360 kubeadm.go:1113] duration metric: took 40.83775ms to wait for elevateKubeSystemPrivileges
	I1028 05:10:56.255359    9360 kubeadm.go:394] duration metric: took 4m11.906103292s to StartCluster
	I1028 05:10:56.255371    9360 settings.go:142] acquiring lock: {Name:mka2e81574940ea53fced239aa2ef4cd7479a0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:10:56.255565    9360 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:10:56.255994    9360 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/kubeconfig: {Name:mk90a124f6c448e81120cf90ba82d6374e9cd851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:10:56.256231    9360 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:10:56.256254    9360 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 05:10:56.256297    9360 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-581000"
	I1028 05:10:56.256309    9360 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-581000"
	W1028 05:10:56.256312    9360 addons.go:243] addon storage-provisioner should already be in state true
	I1028 05:10:56.256327    9360 host.go:66] Checking if "running-upgrade-581000" exists ...
	I1028 05:10:56.256345    9360 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-581000"
	I1028 05:10:56.256355    9360 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-581000"
	I1028 05:10:56.256418    9360 config.go:182] Loaded profile config "running-upgrade-581000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:10:56.257325    9360 kapi.go:59] client config for running-upgrade-581000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/client.key", CAFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104b56680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 05:10:56.257948    9360 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-581000"
	W1028 05:10:56.257953    9360 addons.go:243] addon default-storageclass should already be in state true
	I1028 05:10:56.257960    9360 host.go:66] Checking if "running-upgrade-581000" exists ...
	I1028 05:10:56.260226    9360 out.go:177] * Verifying Kubernetes components...
	I1028 05:10:56.260605    9360 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 05:10:56.266365    9360 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 05:10:56.266374    9360 sshutil.go:53] new ssh client: &{IP:localhost Port:57998 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/running-upgrade-581000/id_rsa Username:docker}
	I1028 05:10:56.270156    9360 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:56.274230    9360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:56.277166    9360 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 05:10:56.277172    9360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 05:10:56.277177    9360 sshutil.go:53] new ssh client: &{IP:localhost Port:57998 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/running-upgrade-581000/id_rsa Username:docker}
	I1028 05:10:56.355049    9360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 05:10:56.360827    9360 api_server.go:52] waiting for apiserver process to appear ...
	I1028 05:10:56.360893    9360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:10:56.365288    9360 api_server.go:72] duration metric: took 109.047ms to wait for apiserver process to appear ...
	I1028 05:10:56.365296    9360 api_server.go:88] waiting for apiserver healthz status ...
	I1028 05:10:56.365303    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:56.406601    9360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 05:10:56.433960    9360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 05:10:56.770943    9360 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 05:10:56.770957    9360 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 05:11:01.367323    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:01.367382    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:06.367639    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:06.367681    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:11.368057    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:11.368079    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:16.368468    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:16.368541    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:21.369151    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:21.369189    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:26.369959    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:26.370012    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1028 05:11:26.772677    9360 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1028 05:11:26.776945    9360 out.go:177] * Enabled addons: storage-provisioner
	I1028 05:11:26.783771    9360 addons.go:510] duration metric: took 30.528183458s for enable addons: enabled=[storage-provisioner]
	I1028 05:11:31.371051    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:31.371149    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:36.373278    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:36.373323    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:41.375345    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:41.375396    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:46.376580    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:46.376667    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:51.377278    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:51.377363    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:56.379764    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:56.380003    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:56.403396    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:11:56.403486    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:56.417992    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:11:56.418072    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:56.433375    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:11:56.433469    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:56.456555    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:11:56.456640    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:56.470354    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:11:56.470432    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:56.481064    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:11:56.481142    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:56.496764    9360 logs.go:282] 0 containers: []
	W1028 05:11:56.496779    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:56.496840    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:56.507711    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:11:56.507726    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:11:56.507731    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:11:56.523925    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:11:56.523937    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:11:56.535490    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:56.535502    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:56.559029    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:11:56.559037    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:11:56.574623    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:56.574634    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:56.579321    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:56.579329    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:56.618026    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:11:56.618037    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:11:56.632970    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:11:56.632983    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:11:56.647577    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:11:56.647589    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:11:56.659572    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:11:56.659581    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:11:56.674262    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:11:56.674275    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:11:56.692160    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:56.692172    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:56.730056    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:11:56.730066    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
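
The five-second cadence above is minikube's apiserver health probe: each Get against /healthz carries a client-side timeout, and a failed probe falls through to the log-gathering pass that follows it. A minimal sketch of the pattern in Go, assuming a plain net/http client with certificate verification skipped (names and structure here are illustrative, not minikube's actual api_server.go):

    // Minimal sketch of the probe loop; illustrative, not minikube's code.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(client *http.Client, url string) error {
        resp, err := client.Get(url)
        if err != nil {
            // A hung apiserver surfaces here as:
            //   Get "...": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s spacing of the "stopped:" lines
            Transport: &http.Transport{
                // the probe hits the apiserver's self-signed cert directly
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://10.0.2.15:8443/healthz"
        for attempt := 0; attempt < 8; attempt++ {
            if err := checkHealthz(client, url); err == nil {
                fmt.Println("apiserver healthy")
                return
            }
            fmt.Printf("stopped: %s: retrying\n", url)
        }
        fmt.Println("apiserver never became healthy; falling through to log gathering")
    }

The Client.Timeout is what produces the exact "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" wording repeated throughout this log.
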
	I1028 05:11:59.243663    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:04.246026    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:04.246225    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:04.263436    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:04.263530    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:04.275911    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:04.275996    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:04.286825    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:04.286899    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:04.297524    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:04.297593    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:04.307918    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:04.307996    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:04.317974    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:04.318047    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:04.331874    9360 logs.go:282] 0 containers: []
	W1028 05:12:04.331889    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:04.331959    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:04.342121    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:04.342135    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:04.342142    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:04.357260    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:04.357271    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:04.368736    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:04.368746    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:04.392551    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:04.392562    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:04.427611    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:04.427618    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:04.461863    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:04.461874    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:04.478714    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:04.478725    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:04.493106    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:04.493115    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:04.504631    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:04.504645    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:04.516291    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:04.516306    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:04.528359    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:04.528370    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:04.549889    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:04.549904    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:04.555037    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:04.555044    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
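
Each gathering pass starts by resolving component names to container IDs with a docker ps name filter, which is what the "N containers: [...]" lines record. A sketch under the assumption of a local docker CLI (minikube runs the same command remotely via ssh_runner):

    // Sketch of the ID lookup behind the "N containers: [...]" lines.
    // Assumes a local docker CLI; the helper name is illustrative.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per output line
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s -> %d containers: %v\n", c, len(ids), ids)
        }
    }

An empty result is not an error, which is why kindnet shows up in this log as 0 containers plus a warning rather than a failure.
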
	I1028 05:12:07.069754    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:12.072416    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:12.072723    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:12.103270    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:12.103412    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:12.122704    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:12.122803    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:12.135593    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:12.135678    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:12.146847    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:12.146916    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:12.157655    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:12.157725    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:12.168295    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:12.168367    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:12.187948    9360 logs.go:282] 0 containers: []
	W1028 05:12:12.187961    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:12.188029    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:12.198184    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:12.198199    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:12.198205    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:12.203103    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:12.203111    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:12.214749    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:12.214760    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:12.229570    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:12.229580    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:12.242344    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:12.242355    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:12.276263    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:12.276272    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:12.310521    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:12.310531    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:12.324970    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:12.324980    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:12.338935    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:12.338947    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:12.351131    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:12.351140    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:12.363763    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:12.363773    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:12.381199    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:12.381210    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:12.397483    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:12.397493    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:14.925283    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:19.927529    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:19.927744    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:19.943970    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:19.944062    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:19.957249    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:19.957329    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:19.968880    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:19.968950    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:19.979290    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:19.979367    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:19.989600    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:19.989677    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:20.000161    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:20.000237    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:20.015468    9360 logs.go:282] 0 containers: []
	W1028 05:12:20.015482    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:20.015544    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:20.026036    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:20.026050    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:20.026057    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:20.030712    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:20.030718    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:20.044962    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:20.044972    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:20.057340    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:20.057354    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:20.072630    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:20.072640    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:20.084317    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:20.084327    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:20.101148    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:20.101158    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:20.113805    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:20.113817    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:20.149914    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:20.149925    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:20.161431    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:20.161443    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:20.186418    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:20.186424    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:20.200536    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:20.200547    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:20.213258    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:20.213268    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
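
The gathering itself is a fan-out of docker logs --tail 400 over each resolved container, plus journalctl and dmesg for the host-level units; the "container status" step prefers crictl when present and falls back to plain docker ps -a. A compressed sketch, again assuming local execution in place of ssh_runner, with the command strings copied from the log:

    // Compressed sketch of the gathering fan-out; commands copied from the log,
    // local /bin/bash standing in for minikube's ssh_runner.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(name, cmd string) {
        fmt.Printf("Gathering logs for %s ...\n", name)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("  (%s failed: %v)\n", name, err)
        }
        fmt.Print(string(out))
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("etcd [0a656fe11ed0]", "docker logs --tail 400 0a656fe11ed0")
        // crictl when present, otherwise plain docker ps -a:
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
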
	I1028 05:12:22.751898    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:27.754053    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:27.754243    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:27.768150    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:27.768245    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:27.779647    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:27.779724    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:27.795604    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:27.795683    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:27.805875    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:27.805963    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:27.816553    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:27.816631    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:27.826849    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:27.826920    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:27.837259    9360 logs.go:282] 0 containers: []
	W1028 05:12:27.837269    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:27.837330    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:27.847842    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:27.847859    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:27.847864    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:27.862204    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:27.862214    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:27.874386    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:27.874398    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:27.885673    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:27.885684    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:27.900578    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:27.900590    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:27.917856    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:27.917866    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:27.930169    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:27.930178    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:27.956085    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:27.956095    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:27.992522    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:27.992535    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:27.997662    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:27.997669    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:28.039130    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:28.039144    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:28.054046    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:28.054056    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:28.064981    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:28.064995    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:30.580309    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:35.582551    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:35.582796    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:35.602208    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:35.602318    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:35.616708    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:35.616801    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:35.628926    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:35.629015    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:35.639863    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:35.639941    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:35.650064    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:35.650137    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:35.660373    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:35.660442    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:35.670595    9360 logs.go:282] 0 containers: []
	W1028 05:12:35.670608    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:35.670677    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:35.680671    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:35.680686    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:35.680692    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:35.723495    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:35.723508    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:35.746425    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:35.746436    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:35.758586    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:35.758600    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:35.771456    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:35.771470    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:35.782864    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:35.782875    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:35.795908    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:35.795920    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:35.831122    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:35.831136    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:35.835782    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:35.835791    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:35.860579    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:35.860588    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:35.878254    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:35.878263    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:35.889862    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:35.889871    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:35.904313    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:35.904323    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:38.423721    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:43.425907    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:43.426096    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:43.446566    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:43.446649    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:43.458342    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:43.458416    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:43.469541    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:43.469621    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:43.487112    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:43.487178    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:43.497466    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:43.497530    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:43.508369    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:43.508448    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:43.518512    9360 logs.go:282] 0 containers: []
	W1028 05:12:43.518523    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:43.518581    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:43.529323    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:43.529338    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:43.529344    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:43.541512    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:43.541527    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:43.562192    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:43.562207    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:43.574289    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:43.574301    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:43.578615    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:43.578623    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:43.593181    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:43.593195    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:43.605276    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:43.605289    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:43.623011    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:43.623026    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:43.634321    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:43.634331    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:43.658769    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:43.658778    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:43.670155    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:43.670163    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:43.704795    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:43.704802    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:43.739817    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:43.739826    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:46.256180    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:51.258297    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:51.258429    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:51.271736    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:51.271822    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:51.282440    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:51.282521    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:51.293045    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:51.293124    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:51.303833    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:51.303906    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:51.314062    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:51.314141    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:51.324790    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:51.324868    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:51.334793    9360 logs.go:282] 0 containers: []
	W1028 05:12:51.334803    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:51.334869    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:51.345414    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:51.345433    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:51.345438    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:51.360288    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:51.360299    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:51.372330    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:51.372339    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:51.387088    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:51.387099    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:51.401666    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:51.401675    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:51.438082    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:51.438091    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:51.457197    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:51.457209    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:51.469485    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:51.469498    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:51.486531    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:51.486541    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:51.497681    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:51.497692    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:51.522585    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:51.522593    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:51.556394    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:51.556404    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:51.561535    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:51.561546    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:54.075351    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:59.077650    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:59.077798    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:59.090181    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:59.090265    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:59.100863    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:59.100951    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:59.112856    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:59.112931    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:59.127668    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:59.127743    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:59.138270    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:59.138347    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:59.148646    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:59.148723    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:59.159215    9360 logs.go:282] 0 containers: []
	W1028 05:12:59.159226    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:59.159283    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:59.169216    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:59.169230    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:59.169236    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:59.183642    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:59.183654    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:59.201926    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:59.201938    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:59.213761    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:59.213771    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:59.231174    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:59.231188    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:59.255936    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:59.255943    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:59.292837    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:59.292847    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:59.297408    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:59.297418    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:59.311470    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:59.311480    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:59.323370    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:59.323381    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:59.335057    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:59.335068    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:59.346109    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:59.346118    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:59.357355    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:59.357364    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:01.894626    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:06.897167    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:06.897286    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:06.910897    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:06.910983    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:06.923541    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:06.923615    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:06.933767    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:13:06.933845    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:06.944525    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:06.944598    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:06.954959    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:06.955029    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:06.965197    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:06.965270    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:06.975414    9360 logs.go:282] 0 containers: []
	W1028 05:13:06.975427    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:06.975486    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:06.985849    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:06.985862    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:06.985868    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:06.999517    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:06.999526    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:07.011067    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:07.011078    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:07.022669    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:07.022679    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:07.045285    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:07.045297    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:07.070394    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:07.070406    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:07.104856    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:07.104866    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:07.141096    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:07.141107    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:07.155877    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:07.155889    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:07.170826    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:07.170837    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:07.182680    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:07.182695    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:07.193893    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:07.193904    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:07.205890    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:07.205900    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:09.712337    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:14.714535    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:14.714691    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:14.726132    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:14.726214    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:14.736514    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:14.736587    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:14.747163    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:14.747233    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:14.758695    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:14.758768    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:14.768927    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:14.769030    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:14.779235    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:14.779302    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:14.790102    9360 logs.go:282] 0 containers: []
	W1028 05:13:14.790114    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:14.790188    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:14.800485    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:14.800502    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:14.800508    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:14.812330    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:14.812339    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:14.816970    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:14.816976    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:14.830858    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:14.830868    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:14.842418    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:14.842429    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:14.854092    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:14.854105    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:14.868927    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:14.868938    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:14.880791    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:14.880802    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:14.904953    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:14.904962    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:14.922427    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:14.922437    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:14.955348    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:14.955354    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:14.991280    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:14.991295    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:15.007660    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:15.007670    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:15.021749    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:15.021759    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:15.033165    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:15.033177    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:17.550489    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:22.552114    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:22.552224    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:22.562781    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:22.562859    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:22.573472    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:22.573557    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:22.584687    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:22.584767    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:22.595573    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:22.595644    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:22.615700    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:22.615775    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:22.626537    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:22.626608    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:22.636830    9360 logs.go:282] 0 containers: []
	W1028 05:13:22.636848    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:22.636907    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:22.647210    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:22.647228    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:22.647234    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:22.681847    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:22.681857    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:22.695655    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:22.695671    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:22.707602    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:22.707612    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:22.719053    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:22.719063    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:22.732134    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:22.732146    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:22.747022    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:22.747031    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:22.758932    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:22.758944    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:22.771465    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:22.771482    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:22.806117    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:22.806130    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:22.820670    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:22.820685    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:22.838492    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:22.838501    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:22.864997    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:22.865007    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:22.869283    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:22.869289    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:22.880943    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:22.880953    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:25.394664    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:30.397115    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:30.397211    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:30.408665    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:30.408746    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:30.423272    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:30.423354    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:30.435201    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:30.435285    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:30.448303    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:30.448383    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:30.462355    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:30.462426    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:30.473298    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:30.473368    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:30.483349    9360 logs.go:282] 0 containers: []
	W1028 05:13:30.483368    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:30.483445    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:30.494148    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:30.494166    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:30.494171    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:30.505934    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:30.505944    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:30.541091    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:30.541098    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:30.554636    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:30.554649    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:30.572051    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:30.572060    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:30.583372    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:30.583384    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:30.595419    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:30.595433    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:30.610745    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:30.610755    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:30.626518    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:30.626527    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:30.642330    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:30.642344    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:30.668046    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:30.668053    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:30.679534    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:30.679547    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:30.684324    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:30.684329    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:30.719581    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:30.719590    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:30.734490    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:30.734500    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:33.248352    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:38.250458    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:38.250564    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:38.261546    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:38.261623    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:38.272525    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:38.272606    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:38.284482    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:38.284567    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:38.295381    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:38.295459    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:38.309049    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:38.309179    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:38.320413    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:38.320496    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:38.331701    9360 logs.go:282] 0 containers: []
	W1028 05:13:38.331711    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:38.331774    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:38.344417    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:38.344434    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:38.344439    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:38.380544    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:38.380556    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:38.408643    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:38.408653    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:38.433612    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:38.433619    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:38.467858    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:38.467871    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:38.481588    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:38.481604    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:38.493103    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:38.493113    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:38.504898    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:38.504914    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:38.516494    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:38.516509    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:38.530724    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:38.530734    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:38.542661    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:38.542671    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:38.561676    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:38.561686    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:38.576395    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:38.576405    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:38.581042    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:38.581049    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:38.595770    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:38.595780    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:41.115406    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:46.117862    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:46.117964    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:46.134110    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:46.134194    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:46.146879    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:46.146965    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:46.158817    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:46.158906    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:46.170101    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:46.170192    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:46.182373    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:46.182450    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:46.194456    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:46.194544    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:46.205993    9360 logs.go:282] 0 containers: []
	W1028 05:13:46.206006    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:46.206078    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:46.217351    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:46.217371    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:46.217376    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:46.232263    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:46.232278    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:46.244666    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:46.244678    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:46.260684    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:46.260698    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:46.295765    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:46.295781    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:46.309292    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:46.309304    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:46.313905    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:46.313915    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:46.329065    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:46.329086    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:46.342299    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:46.342310    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:46.360617    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:46.360631    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:46.396618    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:46.396627    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:46.408925    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:46.408939    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:46.420677    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:46.420691    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:46.439504    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:46.439515    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:46.450945    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:46.450955    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:48.979016    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:53.980079    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:53.980179    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:53.992976    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:53.993058    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:54.007898    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:54.007990    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:54.021042    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:54.021126    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:54.032559    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:54.032634    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:54.043785    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:54.043861    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:54.054734    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:54.054812    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:54.066435    9360 logs.go:282] 0 containers: []
	W1028 05:13:54.066447    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:54.066516    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:54.077750    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:54.077767    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:54.077772    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:54.095484    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:54.095495    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:54.118334    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:54.118348    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:54.156439    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:54.156457    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:54.193942    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:54.193953    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:54.209382    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:54.209393    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:54.222472    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:54.222485    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:54.235087    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:54.235097    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:54.256374    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:54.256388    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:54.270528    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:54.270542    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:54.296138    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:54.296146    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:54.300423    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:54.300428    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:54.312398    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:54.312408    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:54.324969    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:54.324978    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:54.336649    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:54.336658    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:56.852244    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:01.854777    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:01.854871    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:01.865959    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:01.866043    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:01.877854    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:01.877932    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:01.891561    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:01.891643    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:01.903739    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:01.903822    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:01.917025    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:01.917102    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:01.928694    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:01.928776    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:01.939776    9360 logs.go:282] 0 containers: []
	W1028 05:14:01.939788    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:01.939863    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:01.951286    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:01.951304    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:01.951309    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:01.963954    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:01.963964    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:01.976031    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:01.976041    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:01.993956    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:01.993970    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:02.019446    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:02.019459    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:02.024145    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:02.024151    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:02.039248    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:02.039262    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:02.051832    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:02.051846    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:02.067704    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:02.067718    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:02.085495    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:02.085506    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:02.099312    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:02.099325    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:02.146716    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:02.146730    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:02.161630    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:02.161645    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:02.174435    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:02.174450    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:02.209892    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:02.209901    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:04.723610    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:09.725679    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:09.725749    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:09.736762    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:09.736835    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:09.747731    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:09.747808    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:09.759072    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:09.759148    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:09.769918    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:09.769997    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:09.783580    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:09.783658    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:09.795528    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:09.795608    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:09.806890    9360 logs.go:282] 0 containers: []
	W1028 05:14:09.806903    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:09.806969    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:09.818297    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:09.818330    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:09.818337    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:09.832702    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:09.832715    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:09.844633    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:09.844644    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:09.857054    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:09.857065    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:09.872566    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:09.872577    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:09.891309    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:09.891322    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:09.917316    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:09.917325    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:09.922387    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:09.922399    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:09.940811    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:09.940823    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:09.953889    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:09.953902    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:09.966903    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:09.966915    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:09.987480    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:09.987489    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:10.003211    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:10.003222    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:10.016239    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:10.016249    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:10.052654    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:10.052669    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:12.590689    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:17.591997    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:17.592118    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:17.613072    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:17.613162    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:17.625683    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:17.625773    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:17.636953    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:17.637056    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:17.648755    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:17.648834    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:17.660692    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:17.660770    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:17.672593    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:17.672671    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:17.692420    9360 logs.go:282] 0 containers: []
	W1028 05:14:17.692434    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:17.692498    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:17.703623    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:17.703643    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:17.703649    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:17.722739    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:17.722754    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:17.760040    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:17.760054    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:17.774915    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:17.774930    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:17.787639    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:17.787652    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:17.801048    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:17.801062    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:17.813939    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:17.813956    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:17.827163    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:17.827178    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:17.842848    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:17.842862    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:17.847554    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:17.847563    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:17.859822    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:17.859833    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:17.885881    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:17.885893    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:17.922983    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:17.922998    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:17.938776    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:17.938792    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:17.952366    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:17.952379    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:20.467105    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:25.468194    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:25.468596    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:25.517113    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:25.517222    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:25.535597    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:25.535701    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:25.551464    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:25.551552    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:25.563236    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:25.563311    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:25.575597    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:25.575675    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:25.587320    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:25.587395    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:25.598545    9360 logs.go:282] 0 containers: []
	W1028 05:14:25.598558    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:25.598622    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:25.611214    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:25.611234    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:25.611240    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:25.645117    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:25.645131    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:25.657065    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:25.657078    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:25.672535    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:25.672557    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:25.685607    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:25.685619    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:25.690863    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:25.690875    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:25.730215    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:25.730228    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:25.745810    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:25.745823    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:25.758319    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:25.758333    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:25.772191    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:25.772204    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:25.785580    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:25.785593    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:25.798142    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:25.798154    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:25.824352    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:25.824368    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:25.840058    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:25.840069    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:25.852418    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:25.852431    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:28.372470    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:33.372635    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:33.372829    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:33.386765    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:33.386846    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:33.397524    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:33.397600    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:33.408517    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:33.408601    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:33.423301    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:33.423382    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:33.433445    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:33.433517    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:33.444301    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:33.444373    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:33.454328    9360 logs.go:282] 0 containers: []
	W1028 05:14:33.454339    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:33.454406    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:33.472771    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:33.472788    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:33.472794    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:33.509580    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:33.509591    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:33.524375    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:33.524402    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:33.539396    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:33.539404    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:33.557256    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:33.557267    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:33.590469    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:33.590477    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:33.604354    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:33.604364    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:33.615857    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:33.615869    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:33.627829    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:33.627840    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:33.639462    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:33.639471    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:33.652075    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:33.652086    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:33.663578    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:33.663588    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:33.702820    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:33.702832    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:33.711669    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:33.711680    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:33.726326    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:33.726337    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:36.253226    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:41.255509    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:41.255693    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:41.267667    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:41.267754    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:41.278075    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:41.278159    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:41.288715    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:41.288790    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:41.303383    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:41.303464    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:41.313802    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:41.313888    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:41.324059    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:41.324136    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:41.334974    9360 logs.go:282] 0 containers: []
	W1028 05:14:41.334984    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:41.335050    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:41.345114    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:41.345131    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:41.345136    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:41.360877    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:41.360887    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:41.381309    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:41.381320    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:41.394225    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:41.394237    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:41.428821    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:41.428832    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:41.443299    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:41.443309    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:41.454781    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:41.454793    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:41.468343    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:41.468352    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:41.491888    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:41.491896    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:41.503747    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:41.503758    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:41.537158    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:41.537165    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:41.541641    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:41.541652    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:41.553545    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:41.553559    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:41.567561    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:41.567570    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:41.579259    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:41.579270    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:44.095563    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:49.097649    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:49.097757    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:49.109544    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:49.109627    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:49.120005    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:49.120080    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:49.131759    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:49.131840    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:49.141854    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:49.141932    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:49.151698    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:49.151768    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:49.162380    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:49.162453    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:49.172428    9360 logs.go:282] 0 containers: []
	W1028 05:14:49.172442    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:49.172505    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:49.183471    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:49.183488    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:49.183494    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:49.220364    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:49.220375    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:49.232720    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:49.232730    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:49.244589    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:49.244599    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:49.257363    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:49.257374    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:49.275028    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:49.275041    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:49.286923    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:49.286936    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:49.291441    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:49.291450    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:49.305753    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:49.305766    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:49.317629    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:49.317641    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:49.329775    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:49.329787    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:49.344684    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:49.344695    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:49.356349    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:49.356361    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:49.390384    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:49.390397    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:49.409728    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:49.409745    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:51.936962    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:56.939170    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:56.943617    9360 out.go:201] 
	W1028 05:14:56.946508    9360 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1028 05:14:56.946514    9360 out.go:270] * 
	W1028 05:14:56.946965    9360 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:14:56.958510    9360 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-581000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
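The stderr trace above shows the shape of the failure: each probe of https://10.0.2.15:8443/healthz times out after roughly five seconds ("Client.Timeout exceeded while awaiting headers"), minikube gathers component logs, backs off for a couple of seconds, and retries until the overall 6m0s node wait expires and start exits with GUEST_START. The following is a minimal sketch of that probe pattern; the URL, per-request timeout, overall deadline, and error wording are read off the log, but the function itself is illustrative, not minikube's actual api_server.go code:

	// Hypothetical reconstruction of the healthz poll seen in the trace:
	// each probe times out after ~5s ("Client.Timeout exceeded"), and the
	// overall wait gives up after 6m0s ("wait 6m0s for node").
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(ctx context.Context, url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
			Transport: &http.Transport{
				// the apiserver serves a self-signed cert inside the guest
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
			case <-time.After(2500 * time.Millisecond): // the trace suggests ~2.5s between probes
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForHealthz(ctx, "https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("X", err) // e.g. "... context deadline exceeded", as in the log above
		}
	}

Run against a guest whose apiserver never comes up, this loop reproduces the repeated probe/gather/retry cadence visible in the trace before the GUEST_START exit.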
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-28 05:14:57.051817 -0700 PDT m=+1256.502908126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-581000 -n running-upgrade-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-581000 -n running-upgrade-581000: exit status 2 (15.68939675s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
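On the "exit status 2 (may be ok)" note: minikube's status help documents a bit-encoded exit code (1 when the host is not OK, 2 when the cluster is not OK, 4 when Kubernetes is not OK), which is consistent with the stdout above — the Host is Running while the control plane never became healthy. A small, hypothetical decoder under that documented convention; the binary path and profile name are taken from the command above, and the bit meanings are an assumption from minikube's help text, so verify them against the version under test:

	// Decode the bit-encoded exit status of `minikube status`
	// (assumed bits: 1 = host NOK, 2 = cluster NOK, 4 = Kubernetes NOK).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "running-upgrade-581000")
		out, err := cmd.Output()
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode()
		}
		fmt.Printf("host=%q exit=%d hostNOK=%t clusterNOK=%t k8sNOK=%t\n",
			string(out), code, code&1 != 0, code&2 != 0, code&4 != 0)
	}

Under that reading, exit status 2 with host output "Running" is exactly the post-mortem state captured here: VM up, cluster unhealthy.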
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-581000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-219000          | force-systemd-flag-219000 | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-564000              | force-systemd-env-564000  | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-564000           | force-systemd-env-564000  | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT | 28 Oct 24 05:05 PDT |
	| start   | -p docker-flags-624000                | docker-flags-624000       | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-219000             | force-systemd-flag-219000 | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-219000          | force-systemd-flag-219000 | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT | 28 Oct 24 05:05 PDT |
	| start   | -p cert-expiration-512000             | cert-expiration-512000    | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-624000 ssh               | docker-flags-624000       | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-624000 ssh               | docker-flags-624000       | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-624000                | docker-flags-624000       | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT | 28 Oct 24 05:05 PDT |
	| start   | -p cert-options-736000                | cert-options-736000       | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-736000 ssh               | cert-options-736000       | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-736000 -- sudo        | cert-options-736000       | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-736000                | cert-options-736000       | jenkins | v1.34.0 | 28 Oct 24 05:05 PDT | 28 Oct 24 05:05 PDT |
	| start   | -p running-upgrade-581000             | minikube                  | jenkins | v1.26.0 | 28 Oct 24 05:05 PDT | 28 Oct 24 05:06 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-581000             | running-upgrade-581000    | jenkins | v1.34.0 | 28 Oct 24 05:06 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-512000             | cert-expiration-512000    | jenkins | v1.34.0 | 28 Oct 24 05:08 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-512000             | cert-expiration-512000    | jenkins | v1.34.0 | 28 Oct 24 05:08 PDT | 28 Oct 24 05:08 PDT |
	| start   | -p kubernetes-upgrade-845000          | kubernetes-upgrade-845000 | jenkins | v1.34.0 | 28 Oct 24 05:08 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-845000          | kubernetes-upgrade-845000 | jenkins | v1.34.0 | 28 Oct 24 05:08 PDT | 28 Oct 24 05:08 PDT |
	| start   | -p kubernetes-upgrade-845000          | kubernetes-upgrade-845000 | jenkins | v1.34.0 | 28 Oct 24 05:08 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-845000          | kubernetes-upgrade-845000 | jenkins | v1.34.0 | 28 Oct 24 05:08 PDT | 28 Oct 24 05:09 PDT |
	| start   | -p stopped-upgrade-451000             | minikube                  | jenkins | v1.26.0 | 28 Oct 24 05:09 PDT | 28 Oct 24 05:09 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-451000 stop           | minikube                  | jenkins | v1.26.0 | 28 Oct 24 05:09 PDT | 28 Oct 24 05:09 PDT |
	| start   | -p stopped-upgrade-451000             | stopped-upgrade-451000    | jenkins | v1.34.0 | 28 Oct 24 05:09 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 05:09:53
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 05:09:53.650599    9481 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:09:53.650806    9481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:09:53.650809    9481 out.go:358] Setting ErrFile to fd 2...
	I1028 05:09:53.650812    9481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:09:53.650939    9481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:09:53.652033    9481 out.go:352] Setting JSON to false
	I1028 05:09:53.670967    9481 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5964,"bootTime":1730111429,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:09:53.671041    9481 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:09:53.675662    9481 out.go:177] * [stopped-upgrade-451000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:09:53.683627    9481 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:09:53.683681    9481 notify.go:220] Checking for updates...
	I1028 05:09:53.689639    9481 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:09:53.692635    9481 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:09:53.695602    9481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:09:53.698688    9481 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:09:53.701641    9481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:09:53.704848    9481 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:09:53.707646    9481 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 05:09:53.710526    9481 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:09:53.714624    9481 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:09:53.721527    9481 start.go:297] selected driver: qemu2
	I1028 05:09:53.721532    9481 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-451000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 05:09:53.721579    9481 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:09:53.724347    9481 cni.go:84] Creating CNI manager for ""
	I1028 05:09:53.724380    9481 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:09:53.724411    9481 start.go:340] cluster config:
	{Name:stopped-upgrade-451000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 05:09:53.724469    9481 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:09:53.731564    9481 out.go:177] * Starting "stopped-upgrade-451000" primary control-plane node in "stopped-upgrade-451000" cluster
	I1028 05:09:53.735597    9481 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 05:09:53.735615    9481 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1028 05:09:53.735622    9481 cache.go:56] Caching tarball of preloaded images
	I1028 05:09:53.735680    9481 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:09:53.735689    9481 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1028 05:09:53.735743    9481 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/config.json ...
	I1028 05:09:53.736158    9481 start.go:360] acquireMachinesLock for stopped-upgrade-451000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:09:53.736190    9481 start.go:364] duration metric: took 25.334µs to acquireMachinesLock for "stopped-upgrade-451000"
	I1028 05:09:53.736198    9481 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:09:53.736203    9481 fix.go:54] fixHost starting: 
	I1028 05:09:53.736316    9481 fix.go:112] recreateIfNeeded on stopped-upgrade-451000: state=Stopped err=<nil>
	W1028 05:09:53.736325    9481 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:09:53.739545    9481 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-451000" ...
	I1028 05:09:53.104898    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:09:53.105121    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:09:53.127595    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:09:53.127698    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:09:53.140344    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:09:53.140432    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:09:53.152956    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:09:53.153047    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:09:53.163592    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:09:53.163679    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:09:53.175988    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:09:53.176067    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:09:53.186292    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:09:53.186371    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:09:53.196806    9360 logs.go:282] 0 containers: []
	W1028 05:09:53.196816    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:09:53.196878    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:09:53.207690    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:09:53.207711    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:09:53.207716    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:09:53.224993    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:09:53.225004    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:09:53.236938    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:09:53.236949    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:09:53.273975    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:09:53.273987    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:09:53.288293    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:09:53.288307    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:09:53.299686    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:09:53.299700    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:09:53.311101    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:09:53.311113    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:09:53.326509    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:09:53.326520    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:09:53.364097    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:09:53.364108    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:09:53.382643    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:09:53.382654    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:09:53.398125    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:09:53.398139    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:09:53.421619    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:09:53.421629    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:09:53.433150    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:09:53.433164    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:09:53.437659    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:09:53.437667    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:09:53.451223    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:09:53.451235    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:09:53.462988    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:09:53.462998    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:09:53.474468    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:09:53.474480    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:09:53.747615    9481 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:09:53.747710    9481 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/qemu.pid -nic user,model=virtio,hostfwd=tcp::58218-:22,hostfwd=tcp::58219-:2376,hostname=stopped-upgrade-451000 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/disk.qcow2
	I1028 05:09:53.796528    9481 main.go:141] libmachine: STDOUT: 
	I1028 05:09:53.796568    9481 main.go:141] libmachine: STDERR: 
	I1028 05:09:53.796575    9481 main.go:141] libmachine: Waiting for VM to start (ssh -p 58218 docker@127.0.0.1)...
	I1028 05:09:55.999766    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:01.002520    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:01.002739    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:01.014803    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:01.014882    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:01.025932    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:01.026030    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:01.036826    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:01.036901    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:01.047920    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:01.048010    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:01.058722    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:01.058808    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:01.070054    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:01.070134    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:01.080232    9360 logs.go:282] 0 containers: []
	W1028 05:10:01.080244    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:01.080313    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:01.091233    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:01.091251    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:01.091255    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:01.115656    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:01.115663    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:01.129383    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:01.129396    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:01.141214    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:01.141224    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:01.158330    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:01.158653    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:01.179478    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:01.179495    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:01.197567    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:01.197582    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:01.211592    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:01.211605    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:01.223204    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:01.223215    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:01.235249    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:01.235258    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:01.246346    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:01.246356    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:01.250948    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:01.250955    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:01.286231    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:01.286245    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:01.308074    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:01.308091    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:01.320323    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:01.320333    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:01.334393    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:01.334404    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:01.347882    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:01.347891    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:03.888698    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:08.891009    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:08.891559    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:08.931361    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:08.931528    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:08.955962    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:08.956087    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:08.977480    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:08.977567    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:08.988770    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:08.988849    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:08.999010    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:08.999086    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:09.009851    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:09.009925    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:09.025263    9360 logs.go:282] 0 containers: []
	W1028 05:10:09.025274    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:09.025343    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:09.036001    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:09.036019    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:09.036024    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:09.071712    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:09.071725    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:09.086032    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:09.086044    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:09.101670    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:09.101681    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:09.112676    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:09.112691    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:09.126276    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:09.126285    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:09.164328    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:09.164342    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:09.168662    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:09.168668    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:09.182000    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:09.182010    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:09.205611    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:09.205619    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:09.217555    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:09.217568    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:09.237533    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:09.237545    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:09.254677    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:09.254687    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:09.270296    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:09.270309    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:09.287776    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:09.287786    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:09.300539    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:09.300552    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:09.312047    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:09.312057    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:11.825529    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:13.699092    9481 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/config.json ...
	I1028 05:10:13.699989    9481 machine.go:93] provisionDockerMachine start ...
	I1028 05:10:13.700237    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:13.700787    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:13.700803    9481 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 05:10:13.787533    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 05:10:13.787573    9481 buildroot.go:166] provisioning hostname "stopped-upgrade-451000"
	I1028 05:10:13.787721    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:13.788012    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:13.788026    9481 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-451000 && echo "stopped-upgrade-451000" | sudo tee /etc/hostname
	I1028 05:10:13.865332    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-451000
	
	I1028 05:10:13.865448    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:13.865653    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:13.865667    9481 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-451000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-451000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-451000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 05:10:13.932635    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 05:10:13.932650    9481 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19875-6942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19875-6942/.minikube}
	I1028 05:10:13.932669    9481 buildroot.go:174] setting up certificates
	I1028 05:10:13.932676    9481 provision.go:84] configureAuth start
	I1028 05:10:13.932684    9481 provision.go:143] copyHostCerts
	I1028 05:10:13.932753    9481 exec_runner.go:144] found /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.pem, removing ...
	I1028 05:10:13.932759    9481 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.pem
	I1028 05:10:13.932854    9481 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.pem (1082 bytes)
	I1028 05:10:13.933040    9481 exec_runner.go:144] found /Users/jenkins/minikube-integration/19875-6942/.minikube/cert.pem, removing ...
	I1028 05:10:13.933044    9481 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19875-6942/.minikube/cert.pem
	I1028 05:10:13.933088    9481 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19875-6942/.minikube/cert.pem (1123 bytes)
	I1028 05:10:13.933208    9481 exec_runner.go:144] found /Users/jenkins/minikube-integration/19875-6942/.minikube/key.pem, removing ...
	I1028 05:10:13.933212    9481 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19875-6942/.minikube/key.pem
	I1028 05:10:13.933258    9481 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19875-6942/.minikube/key.pem (1675 bytes)
	I1028 05:10:13.933353    9481 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-451000 san=[127.0.0.1 localhost minikube stopped-upgrade-451000]
	I1028 05:10:14.001064    9481 provision.go:177] copyRemoteCerts
	I1028 05:10:14.001110    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 05:10:14.001118    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	I1028 05:10:14.033554    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 05:10:14.040295    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 05:10:14.047699    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 05:10:14.054715    9481 provision.go:87] duration metric: took 122.033084ms to configureAuth
	I1028 05:10:14.054724    9481 buildroot.go:189] setting minikube options for container-runtime
	I1028 05:10:14.054837    9481 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:10:14.054887    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:14.054979    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:14.054983    9481 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 05:10:14.110216    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 05:10:14.110224    9481 buildroot.go:70] root file system type: tmpfs
	I1028 05:10:14.110274    9481 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 05:10:14.110331    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:14.110443    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:14.110476    9481 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 05:10:14.172701    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 05:10:14.172771    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:14.172884    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:14.172892    9481 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 05:10:14.534475    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 05:10:14.534491    9481 machine.go:96] duration metric: took 834.508625ms to provisionDockerMachine
	I1028 05:10:14.534499    9481 start.go:293] postStartSetup for "stopped-upgrade-451000" (driver="qemu2")
	I1028 05:10:14.534506    9481 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 05:10:14.534585    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 05:10:14.534597    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	I1028 05:10:14.567578    9481 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 05:10:14.568834    9481 info.go:137] Remote host: Buildroot 2021.02.12
	I1028 05:10:14.568841    9481 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19875-6942/.minikube/addons for local assets ...
	I1028 05:10:14.568911    9481 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19875-6942/.minikube/files for local assets ...
	I1028 05:10:14.568997    9481 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem -> 74522.pem in /etc/ssl/certs
	I1028 05:10:14.569103    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 05:10:14.572155    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem --> /etc/ssl/certs/74522.pem (1708 bytes)
	I1028 05:10:14.579561    9481 start.go:296] duration metric: took 45.057917ms for postStartSetup
	I1028 05:10:14.579577    9481 fix.go:56] duration metric: took 20.843831084s for fixHost
	I1028 05:10:14.579626    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:14.579728    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:14.579732    9481 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 05:10:14.634636    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117415.009479629
	
	I1028 05:10:14.634643    9481 fix.go:216] guest clock: 1730117415.009479629
	I1028 05:10:14.634647    9481 fix.go:229] Guest: 2024-10-28 05:10:15.009479629 -0700 PDT Remote: 2024-10-28 05:10:14.579579 -0700 PDT m=+20.952351793 (delta=429.900629ms)
	I1028 05:10:14.634658    9481 fix.go:200] guest clock delta is within tolerance: 429.900629ms
	I1028 05:10:14.634660    9481 start.go:83] releasing machines lock for "stopped-upgrade-451000", held for 20.898923292s
	I1028 05:10:14.634740    9481 ssh_runner.go:195] Run: cat /version.json
	I1028 05:10:14.634749    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	I1028 05:10:14.634740    9481 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 05:10:14.634780    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	W1028 05:10:14.635305    9481 sshutil.go:64] dial failure (will retry): dial tcp [::1]:58218: connect: connection refused
	I1028 05:10:14.635322    9481 retry.go:31] will retry after 352.776313ms: dial tcp [::1]:58218: connect: connection refused
	W1028 05:10:14.665278    9481 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1028 05:10:14.665325    9481 ssh_runner.go:195] Run: systemctl --version
	I1028 05:10:14.667147    9481 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 05:10:14.668748    9481 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 05:10:14.668782    9481 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1028 05:10:14.671462    9481 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1028 05:10:14.676558    9481 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 05:10:14.676566    9481 start.go:495] detecting cgroup driver to use...
	I1028 05:10:14.676654    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 05:10:14.683490    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1028 05:10:14.686831    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 05:10:14.689684    9481 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 05:10:14.689714    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 05:10:14.692634    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 05:10:14.696242    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 05:10:14.699771    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 05:10:14.703265    9481 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 05:10:14.706534    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 05:10:14.709377    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 05:10:14.712419    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 05:10:14.715928    9481 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 05:10:14.719184    9481 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 05:10:14.721968    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:14.809766    9481 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 05:10:14.816882    9481 start.go:495] detecting cgroup driver to use...
	I1028 05:10:14.816994    9481 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 05:10:14.829412    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 05:10:14.835498    9481 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 05:10:14.846137    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 05:10:14.851152    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 05:10:14.855904    9481 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 05:10:14.894820    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
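Runtime selection above leans on systemctl exit codes: `systemctl is-active --quiet` exits 0 only for an active unit, so competing runtimes (containerd, crio) are probed and then stopped before Docker is configured. A small sketch of that probe pattern; unit names come from the log, and the extra `service` token mirrors the exact invocation logged:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // isActive reports whether a systemd unit is active, using the exit
    // status of "systemctl is-active --quiet", as in the log above.
    func isActive(unit string) bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", unit).Run() == nil
    }

    func main() {
        for _, u := range []string{"containerd", "crio"} {
            fmt.Printf("%s active: %v\n", u, isActive(u))
        }
    }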
	I1028 05:10:14.899563    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 05:10:14.905224    9481 ssh_runner.go:195] Run: which cri-dockerd
	I1028 05:10:14.906515    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 05:10:14.909129    9481 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1028 05:10:14.914202    9481 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 05:10:14.991736    9481 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 05:10:15.076674    9481 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 05:10:15.076743    9481 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 05:10:15.081803    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:15.164594    9481 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 05:10:16.301507    9481 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.136921084s)
	I1028 05:10:16.301601    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 05:10:16.306135    9481 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1028 05:10:16.312378    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 05:10:16.317617    9481 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 05:10:16.388294    9481 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 05:10:16.476327    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:16.555544    9481 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 05:10:16.561516    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 05:10:16.566026    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:16.628481    9481 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 05:10:16.665897    9481 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 05:10:16.665997    9481 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 05:10:16.667901    9481 start.go:563] Will wait 60s for crictl version
	I1028 05:10:16.667958    9481 ssh_runner.go:195] Run: which crictl
	I1028 05:10:16.669735    9481 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 05:10:16.685128    9481 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1028 05:10:16.685208    9481 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 05:10:16.702639    9481 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 05:10:16.721992    9481 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1028 05:10:16.722140    9481 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1028 05:10:16.723501    9481 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
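The /etc/hosts edit above is idempotent: it strips any existing `host.minikube.internal` line before appending the fresh mapping, so repeated starts do not accumulate duplicate entries. A stdlib-only sketch of that strip-then-append step, with the IP and hostname taken from the log (writing /etc/hosts requires root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost removes any line ending in "<tab><name>" and appends "ip<tab>name",
    // matching the grep -v / echo pipeline in the log above.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "10.0.2.2", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }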
	I1028 05:10:16.727301    9481 kubeadm.go:883] updating cluster {Name:stopped-upgrade-451000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1028 05:10:16.727352    9481 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 05:10:16.727399    9481 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 05:10:16.737895    9481 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 05:10:16.737911    9481 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 05:10:16.737973    9481 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 05:10:16.740914    9481 ssh_runner.go:195] Run: which lz4
	I1028 05:10:16.742323    9481 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 05:10:16.743511    9481 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 05:10:16.743521    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1028 05:10:17.676318    9481 docker.go:653] duration metric: took 934.076292ms to copy over tarball
	I1028 05:10:17.676406    9481 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
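The preload path above is: stat the tarball on the guest, scp it over when the stat fails, then unpack into /var with tar's lz4 filter while preserving security xattrs (the completion is logged a second later, interleaved with output from the second test process, pid 9360). A sketch of the check-then-extract half, with guest paths from the log and the scp step elided:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4" // guest path used in the log above
        if _, err := os.Stat(tarball); err != nil {
            fmt.Fprintln(os.Stderr, "tarball missing; it would be copied over first:", err)
            os.Exit(1)
        }
        // Same flags as the log: keep security xattrs, decompress via lz4.
        cmd := exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }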
	I1028 05:10:16.826264    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:16.826363    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:16.839089    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:16.839170    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:16.850785    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:16.850873    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:16.863309    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:16.863397    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:16.875384    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:16.875467    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:16.887475    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:16.887555    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:16.903382    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:16.903475    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:16.915219    9360 logs.go:282] 0 containers: []
	W1028 05:10:16.915232    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:16.915303    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:16.927094    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:16.927115    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:16.927120    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:16.932277    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:16.932289    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:16.973376    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:16.973392    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:16.990367    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:16.990380    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:17.008239    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:17.008253    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:17.027338    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:17.027350    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:17.043694    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:17.043706    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:17.068632    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:17.068647    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:17.094401    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:17.094413    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:17.107737    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:17.107749    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:17.122373    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:17.122386    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:17.143336    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:17.143353    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:17.162453    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:17.162464    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:17.176726    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:17.176738    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:17.189536    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:17.189548    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:17.229819    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:17.229836    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:17.242699    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:17.242711    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:18.865057    9481 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.188658417s)
	I1028 05:10:18.865072    9481 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 05:10:18.881379    9481 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 05:10:18.884582    9481 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1028 05:10:18.889859    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:18.952373    9481 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 05:10:20.504533    9481 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.552177041s)
	I1028 05:10:20.504654    9481 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 05:10:20.517489    9481 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 05:10:20.517506    9481 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 05:10:20.517513    9481 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 05:10:20.521548    9481 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:20.523394    9481 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:10:20.525810    9481 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:10:20.525929    9481 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:20.528031    9481 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:10:20.528137    9481 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:10:20.529951    9481 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 05:10:20.529969    9481 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:10:20.531254    9481 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:10:20.531471    9481 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:10:20.532601    9481 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:10:20.532951    9481 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 05:10:20.533921    9481 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 05:10:20.534065    9481 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:10:20.534835    9481 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:10:20.535774    9481 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 05:10:21.079312    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:10:21.090395    9481 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1028 05:10:21.090426    9481 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:10:21.090473    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:10:21.098467    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:10:21.103217    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1028 05:10:21.110872    9481 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1028 05:10:21.110893    9481 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:10:21.110960    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:10:21.121445    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1028 05:10:21.124612    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:10:21.136085    9481 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1028 05:10:21.136111    9481 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:10:21.136161    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:10:21.147711    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1028 05:10:21.209551    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1028 05:10:21.220036    9481 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1028 05:10:21.220060    9481 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1028 05:10:21.220119    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1028 05:10:21.225001    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:10:21.231165    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1028 05:10:21.239705    9481 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1028 05:10:21.239725    9481 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:10:21.239792    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:10:21.250015    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W1028 05:10:21.309077    9481 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1028 05:10:21.309224    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:10:21.321306    9481 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1028 05:10:21.321327    9481 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:10:21.321397    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:10:21.331607    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1028 05:10:21.332433    9481 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1028 05:10:21.334061    9481 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1028 05:10:21.334077    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1028 05:10:21.357344    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1028 05:10:21.376868    9481 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1028 05:10:21.376883    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1028 05:10:21.377564    9481 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1028 05:10:21.377582    9481 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1028 05:10:21.377652    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W1028 05:10:21.417495    9481 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1028 05:10:21.417615    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:21.429002    9481 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1028 05:10:21.429053    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1028 05:10:21.429183    9481 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1028 05:10:21.431087    9481 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1028 05:10:21.431103    9481 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:21.431151    9481 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:21.431589    9481 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1028 05:10:21.431603    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1028 05:10:21.446278    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 05:10:21.446421    9481 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 05:10:21.448336    9481 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1028 05:10:21.448352    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1028 05:10:21.450253    9481 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1028 05:10:21.450262    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1028 05:10:21.498883    9481 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1028 05:10:21.498914    9481 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 05:10:21.498923    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1028 05:10:21.740257    9481 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 05:10:21.740296    9481 cache_images.go:92] duration metric: took 1.222795s to LoadCachedImages
	W1028 05:10:21.740339    9481 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
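Each cached image above goes through the same loop: `docker image inspect` to compare the stored hash, `docker rmi` on a mismatch, scp of the cached image tar, then `sudo cat <tar> | docker load`. The loop aborts here because one cache file (kube-controller-manager_v1.24.1) is missing on the host. A sketch of the final load step only, using a guest path from the log; the inspect/rmi/scp steps are elided:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadImage streams a saved image tar into the docker daemon, the Go
    // equivalent of the "sudo cat <tar> | docker load" pipeline in the log.
    func loadImage(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }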
	I1028 05:10:21.740345    9481 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1028 05:10:21.740404    9481 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-451000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 05:10:21.740470    9481 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 05:10:21.758369    9481 cni.go:84] Creating CNI manager for ""
	I1028 05:10:21.758380    9481 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:10:21.758390    9481 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 05:10:21.758399    9481 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-451000 NodeName:stopped-upgrade-451000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 05:10:21.758477    9481 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-451000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
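The generated file above is four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written to /var/tmp/minikube/kubeadm.yaml.new. A rough stdlib-only sanity check that every document in such a file declares a kind; the path is from the log, and a real check would use a proper YAML parser:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Each "---"-separated YAML document should declare its kind.
        for i, doc := range strings.Split(string(data), "\n---\n") {
            if !strings.Contains(doc, "kind: ") {
                fmt.Printf("document %d is missing a kind\n", i)
            }
        }
    }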
	
	I1028 05:10:21.758544    9481 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1028 05:10:21.761677    9481 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 05:10:21.761717    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 05:10:21.764310    9481 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1028 05:10:21.769522    9481 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 05:10:21.774297    9481 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1028 05:10:21.780007    9481 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1028 05:10:21.781412    9481 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 05:10:21.784870    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:21.863701    9481 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 05:10:21.869345    9481 certs.go:68] Setting up /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000 for IP: 10.0.2.15
	I1028 05:10:21.869355    9481 certs.go:194] generating shared ca certs ...
	I1028 05:10:21.869364    9481 certs.go:226] acquiring lock for ca certs: {Name:mk596dd32716491232c9389abcfad3254ffdbfdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:10:21.869546    9481 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.key
	I1028 05:10:21.869587    9481 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/proxy-client-ca.key
	I1028 05:10:21.869594    9481 certs.go:256] generating profile certs ...
	I1028 05:10:21.869658    9481 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/client.key
	I1028 05:10:21.869681    9481 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key.1585e0f0
	I1028 05:10:21.869692    9481 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt.1585e0f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1028 05:10:21.969010    9481 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt.1585e0f0 ...
	I1028 05:10:21.969026    9481 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt.1585e0f0: {Name:mkf639cf273112e125f85c493bba6c636444a0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:10:21.969371    9481 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key.1585e0f0 ...
	I1028 05:10:21.969376    9481 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key.1585e0f0: {Name:mkcb4bec4b86434e343725edfa795749cf16a56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:10:21.970100    9481 certs.go:381] copying /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt.1585e0f0 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt
	I1028 05:10:21.970253    9481 certs.go:385] copying /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key.1585e0f0 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key
	I1028 05:10:21.970396    9481 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/proxy-client.key
	I1028 05:10:21.970540    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/7452.pem (1338 bytes)
	W1028 05:10:21.970565    9481 certs.go:480] ignoring /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/7452_empty.pem, impossibly tiny 0 bytes
	I1028 05:10:21.970571    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 05:10:21.970591    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem (1082 bytes)
	I1028 05:10:21.970611    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem (1123 bytes)
	I1028 05:10:21.970628    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/key.pem (1675 bytes)
	I1028 05:10:21.970666    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem (1708 bytes)
	I1028 05:10:21.971035    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 05:10:21.978547    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 05:10:21.985774    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 05:10:21.992615    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 05:10:21.999605    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 05:10:22.006971    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 05:10:22.013987    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 05:10:22.020591    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 05:10:22.027967    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I1028 05:10:22.035219    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem --> /usr/share/ca-certificates/74522.pem (1708 bytes)
	I1028 05:10:22.041884    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 05:10:22.048599    9481 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 05:10:22.053741    9481 ssh_runner.go:195] Run: openssl version
	I1028 05:10:22.055608    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/74522.pem && ln -fs /usr/share/ca-certificates/74522.pem /etc/ssl/certs/74522.pem"
	I1028 05:10:22.059328    9481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/74522.pem
	I1028 05:10:22.060891    9481 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:54 /usr/share/ca-certificates/74522.pem
	I1028 05:10:22.060920    9481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/74522.pem
	I1028 05:10:22.062895    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/74522.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 05:10:22.065803    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 05:10:22.068706    9481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 05:10:22.070205    9481 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 12:06 /usr/share/ca-certificates/minikubeCA.pem
	I1028 05:10:22.070231    9481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 05:10:22.071864    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 05:10:22.074986    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I1028 05:10:22.077776    9481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7452.pem
	I1028 05:10:22.079116    9481 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:54 /usr/share/ca-certificates/7452.pem
	I1028 05:10:22.079143    9481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I1028 05:10:22.080855    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I1028 05:10:22.084186    9481 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 05:10:22.085691    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 05:10:22.087695    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 05:10:22.089799    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 05:10:22.091615    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 05:10:22.093319    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 05:10:22.095027    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
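The six openssl probes above use `-checkend 86400`, which fails when a certificate expires within the next 24 hours; that is what decides whether certs get regenerated on restart. A native Go equivalent for a single certificate, with a path taken from the log and error handling trimmed to essentials:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, mirroring "openssl x509 -checkend <seconds>".
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }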
	I1028 05:10:22.096812    9481 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-451000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 05:10:22.096883    9481 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 05:10:22.106533    9481 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 05:10:22.109615    9481 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 05:10:22.109625    9481 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 05:10:22.109660    9481 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 05:10:22.112390    9481 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 05:10:22.112691    9481 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-451000" does not appear in /Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:10:22.112791    9481 kubeconfig.go:62] /Users/jenkins/minikube-integration/19875-6942/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-451000" cluster setting kubeconfig missing "stopped-upgrade-451000" context setting]
	I1028 05:10:22.112993    9481 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/kubeconfig: {Name:mk90a124f6c448e81120cf90ba82d6374e9cd851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:10:22.113453    9481 kapi.go:59] client config for stopped-upgrade-451000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/client.key", CAFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104a72680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 05:10:22.113827    9481 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 05:10:22.116456    9481 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-451000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
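Drift detection above is plain `diff -u` on the old and new kubeadm.yaml: exit 0 means identical, exit 1 means drift (reconfigure from the .new file), anything higher is an error. A sketch of branching on that exit-code convention, with paths from the log:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // drifted runs "diff -u old new" and maps exit status 1 to "files differ",
    // the convention the restart path above relies on.
    func drifted(oldPath, newPath string) (bool, error) {
        err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
        if err == nil {
            return false, nil // identical
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, nil // files differ: reconfigure the cluster
        }
        return false, err // exit >= 2 or the command failed to run
    }

    func main() {
        d, err := drifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println("drifted:", d, "err:", err)
    }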
	I1028 05:10:22.116461    9481 kubeadm.go:1160] stopping kube-system containers ...
	I1028 05:10:22.116510    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 05:10:22.128026    9481 docker.go:483] Stopping containers: [9954ffaa9f68 d14d16734881 84467d88e691 fc096b12f559 1798e6b77be3 47e6cfc87e4e be4344508268 f02184c9956d]
	I1028 05:10:22.128096    9481 ssh_runner.go:195] Run: docker stop 9954ffaa9f68 d14d16734881 84467d88e691 fc096b12f559 1798e6b77be3 47e6cfc87e4e be4344508268 f02184c9956d
	I1028 05:10:22.138884    9481 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 05:10:22.144574    9481 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 05:10:22.147936    9481 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 05:10:22.147942    9481 kubeadm.go:157] found existing configuration files:
	
	I1028 05:10:22.147976    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/admin.conf
	I1028 05:10:22.151122    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 05:10:22.151150    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 05:10:22.153669    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/kubelet.conf
	I1028 05:10:22.156223    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 05:10:22.156246    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 05:10:22.159184    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/controller-manager.conf
	I1028 05:10:22.161831    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 05:10:22.161854    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 05:10:22.164558    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/scheduler.conf
	I1028 05:10:22.167521    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 05:10:22.167549    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 05:10:22.170341    9481 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 05:10:22.172944    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:10:22.195553    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:10:22.571628    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:10:22.702429    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:10:22.736020    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
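Rather than a full `kubeadm init`, the restart path above replays individual init phases in a fixed order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequencing, with the binary and config paths from the log; the `sudo env PATH=...` wrapper is simplified away:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{kubeadm}, p...)
            args = append(args, "--config", cfg)
            cmd := exec.Command("sudo", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintln(os.Stderr, "phase failed:", p, err)
                os.Exit(1)
            }
        }
    }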
	I1028 05:10:22.768284    9481 api_server.go:52] waiting for apiserver process to appear ...
	I1028 05:10:22.768388    9481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:10:23.270417    9481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:10:19.758232    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:23.770423    9481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:10:23.774649    9481 api_server.go:72] duration metric: took 1.006386125s to wait for apiserver process to appear ...
	I1028 05:10:23.774660    9481 api_server.go:88] waiting for apiserver healthz status ...
	I1028 05:10:23.774676    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
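Readiness is established in two stages above: first poll pgrep until a kube-apiserver process exists (roughly every 500ms against a 60s budget), then poll the healthz endpoint. A sketch of the first stage, with the pattern and timing taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the apiserver process")
    }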
	I1028 05:10:24.760837    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:24.761383    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:24.804866    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:24.805027    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:24.824219    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:24.824320    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:24.838773    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:24.838863    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:24.850572    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:24.850663    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:24.862418    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:24.862516    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:24.874216    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:24.874295    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:24.885217    9360 logs.go:282] 0 containers: []
	W1028 05:10:24.885230    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:24.885295    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:24.898440    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:24.898459    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:24.898464    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:24.913213    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:24.913222    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:24.930658    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:24.930670    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:24.949877    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:24.949886    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:24.962074    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:24.962086    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:24.979922    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:24.979933    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:25.014385    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:25.014398    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:25.025826    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:25.025837    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:25.040985    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:25.040996    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:25.056395    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:25.056406    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:25.072888    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:25.072898    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:25.090723    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:25.090733    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:25.105684    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:25.105696    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:25.110564    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:25.110572    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:25.132820    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:25.132830    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:25.149224    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:25.149237    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:25.161333    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:25.161344    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:27.699306    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:28.776629    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:28.776675    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:32.701529    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:32.701835    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:32.726458    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:32.726576    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:32.742685    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:32.742782    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:32.755910    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:32.755991    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:32.766950    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:32.767023    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:32.777787    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:32.777863    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:32.788145    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:32.788221    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:32.798489    9360 logs.go:282] 0 containers: []
	W1028 05:10:32.798508    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:32.798583    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:32.814530    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:32.814551    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:32.814558    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:32.819941    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:32.819955    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:32.860155    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:32.860166    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:32.873958    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:32.873969    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:32.889411    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:32.889422    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:32.900677    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:32.900692    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:32.918960    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:32.918970    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:32.934411    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:32.934422    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:32.956923    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:32.956933    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:32.992575    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:32.992585    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:33.006987    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:33.006998    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:33.018524    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:33.018535    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:33.037469    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:33.037480    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:33.055358    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:33.055370    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:33.066752    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:33.066763    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:33.078767    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:33.078781    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:33.090382    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:33.090395    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:33.776857    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:33.776897    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:35.604437    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:38.777224    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:38.777272    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:40.606527    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:40.606642    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:10:40.621803    9360 logs.go:282] 2 containers: [bdc470a6e115 add71d73a2be]
	I1028 05:10:40.621886    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:10:40.634025    9360 logs.go:282] 2 containers: [fa47927e5016 541001035c33]
	I1028 05:10:40.634134    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:10:40.645273    9360 logs.go:282] 1 containers: [72a81bd7e520]
	I1028 05:10:40.645347    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:10:40.656922    9360 logs.go:282] 2 containers: [21af65b1e3e6 0589805c2cad]
	I1028 05:10:40.657005    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:10:40.667510    9360 logs.go:282] 1 containers: [9bd8799955c1]
	I1028 05:10:40.667591    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:10:40.678470    9360 logs.go:282] 2 containers: [2f602e2a9589 f66c957c1d88]
	I1028 05:10:40.678555    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:10:40.689496    9360 logs.go:282] 0 containers: []
	W1028 05:10:40.689506    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:10:40.689581    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:10:40.701381    9360 logs.go:282] 2 containers: [e31e89a6d119 c2e33e18c935]
	I1028 05:10:40.701396    9360 logs.go:123] Gathering logs for etcd [541001035c33] ...
	I1028 05:10:40.701401    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 541001035c33"
	I1028 05:10:40.718463    9360 logs.go:123] Gathering logs for kube-controller-manager [f66c957c1d88] ...
	I1028 05:10:40.718477    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f66c957c1d88"
	I1028 05:10:40.731206    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:10:40.731217    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:10:40.769773    9360 logs.go:123] Gathering logs for kube-apiserver [add71d73a2be] ...
	I1028 05:10:40.769783    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add71d73a2be"
	I1028 05:10:40.791323    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:10:40.791337    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:10:40.814234    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:10:40.814250    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:10:40.818795    9360 logs.go:123] Gathering logs for storage-provisioner [e31e89a6d119] ...
	I1028 05:10:40.818803    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31e89a6d119"
	I1028 05:10:40.831068    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:10:40.831078    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:10:40.843091    9360 logs.go:123] Gathering logs for kube-apiserver [bdc470a6e115] ...
	I1028 05:10:40.843102    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc470a6e115"
	I1028 05:10:40.857085    9360 logs.go:123] Gathering logs for kube-proxy [9bd8799955c1] ...
	I1028 05:10:40.857098    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bd8799955c1"
	I1028 05:10:40.869157    9360 logs.go:123] Gathering logs for coredns [72a81bd7e520] ...
	I1028 05:10:40.869167    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72a81bd7e520"
	I1028 05:10:40.880245    9360 logs.go:123] Gathering logs for kube-scheduler [21af65b1e3e6] ...
	I1028 05:10:40.880256    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21af65b1e3e6"
	I1028 05:10:40.891778    9360 logs.go:123] Gathering logs for kube-scheduler [0589805c2cad] ...
	I1028 05:10:40.891793    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0589805c2cad"
	I1028 05:10:40.909708    9360 logs.go:123] Gathering logs for kube-controller-manager [2f602e2a9589] ...
	I1028 05:10:40.909718    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f602e2a9589"
	I1028 05:10:40.927439    9360 logs.go:123] Gathering logs for storage-provisioner [c2e33e18c935] ...
	I1028 05:10:40.927450    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e33e18c935"
	I1028 05:10:40.939363    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:10:40.939375    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:10:40.974747    9360 logs.go:123] Gathering logs for etcd [fa47927e5016] ...
	I1028 05:10:40.974760    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa47927e5016"
	I1028 05:10:43.492230    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:43.777922    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:43.778013    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:48.494617    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:48.494860    9360 kubeadm.go:597] duration metric: took 4m4.131999458s to restartPrimaryControlPlane
	W1028 05:10:48.495050    9360 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 05:10:48.495115    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1028 05:10:49.513490    9360 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.018382167s)
	I1028 05:10:49.513577    9360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 05:10:49.518544    9360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 05:10:49.521493    9360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 05:10:49.524208    9360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 05:10:49.524215    9360 kubeadm.go:157] found existing configuration files:
	
	I1028 05:10:49.524244    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/admin.conf
	I1028 05:10:49.526680    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 05:10:49.526708    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 05:10:49.529691    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/kubelet.conf
	I1028 05:10:49.532309    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 05:10:49.532336    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 05:10:49.534902    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/controller-manager.conf
	I1028 05:10:49.537935    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 05:10:49.537967    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 05:10:49.540660    9360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/scheduler.conf
	I1028 05:10:49.543105    9360 kubeadm.go:163] "https://control-plane.minikube.internal:58030" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:58030 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 05:10:49.543129    9360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 05:10:49.546094    9360 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 05:10:49.563231    9360 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1028 05:10:49.563327    9360 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 05:10:49.619480    9360 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 05:10:49.619542    9360 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 05:10:49.619676    9360 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 05:10:49.668674    9360 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 05:10:49.673714    9360 out.go:235]   - Generating certificates and keys ...
	I1028 05:10:49.673747    9360 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 05:10:49.673774    9360 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 05:10:49.673813    9360 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 05:10:49.673840    9360 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 05:10:49.673909    9360 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 05:10:49.673937    9360 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 05:10:49.674042    9360 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 05:10:49.674078    9360 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 05:10:49.674120    9360 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 05:10:49.674162    9360 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 05:10:49.674184    9360 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 05:10:49.674211    9360 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 05:10:49.820116    9360 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 05:10:49.904835    9360 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 05:10:50.056021    9360 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 05:10:50.135922    9360 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 05:10:50.164370    9360 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 05:10:50.165482    9360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 05:10:50.165505    9360 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 05:10:50.258018    9360 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 05:10:48.779077    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:48.779102    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:50.261414    9360 out.go:235]   - Booting up control plane ...
	I1028 05:10:50.261460    9360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 05:10:50.261495    9360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 05:10:50.261525    9360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 05:10:50.261562    9360 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 05:10:50.261629    9360 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 05:10:54.756230    9360 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501966 seconds
	I1028 05:10:54.756308    9360 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 05:10:54.760027    9360 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 05:10:55.285778    9360 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 05:10:55.286253    9360 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-581000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 05:10:55.789148    9360 kubeadm.go:310] [bootstrap-token] Using token: oagvam.5bwhpiu7oekjyo1h
	I1028 05:10:55.795437    9360 out.go:235]   - Configuring RBAC rules ...
	I1028 05:10:55.795485    9360 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 05:10:55.795519    9360 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 05:10:55.801938    9360 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 05:10:55.802843    9360 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 05:10:55.803606    9360 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 05:10:55.804616    9360 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 05:10:55.807957    9360 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 05:10:56.003112    9360 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 05:10:56.192981    9360 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 05:10:56.193370    9360 kubeadm.go:310] 
	I1028 05:10:56.193398    9360 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 05:10:56.193403    9360 kubeadm.go:310] 
	I1028 05:10:56.193444    9360 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 05:10:56.193448    9360 kubeadm.go:310] 
	I1028 05:10:56.193459    9360 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 05:10:56.193485    9360 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 05:10:56.193514    9360 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 05:10:56.193563    9360 kubeadm.go:310] 
	I1028 05:10:56.193633    9360 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 05:10:56.193646    9360 kubeadm.go:310] 
	I1028 05:10:56.193694    9360 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 05:10:56.193697    9360 kubeadm.go:310] 
	I1028 05:10:56.193723    9360 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 05:10:56.193755    9360 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 05:10:56.193798    9360 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 05:10:56.193801    9360 kubeadm.go:310] 
	I1028 05:10:56.193840    9360 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 05:10:56.193900    9360 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 05:10:56.193905    9360 kubeadm.go:310] 
	I1028 05:10:56.193983    9360 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oagvam.5bwhpiu7oekjyo1h \
	I1028 05:10:56.194063    9360 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f88359f6e177cdd7e99d84e4e5a9c564acd9fd4ce77f443c1fef5fb70f89e325 \
	I1028 05:10:56.194091    9360 kubeadm.go:310] 	--control-plane 
	I1028 05:10:56.194096    9360 kubeadm.go:310] 
	I1028 05:10:56.194135    9360 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 05:10:56.194137    9360 kubeadm.go:310] 
	I1028 05:10:56.194174    9360 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oagvam.5bwhpiu7oekjyo1h \
	I1028 05:10:56.194259    9360 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f88359f6e177cdd7e99d84e4e5a9c564acd9fd4ce77f443c1fef5fb70f89e325 
	I1028 05:10:56.194323    9360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 05:10:56.194335    9360 cni.go:84] Creating CNI manager for ""
	I1028 05:10:56.194346    9360 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:10:56.197218    9360 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 05:10:56.205226    9360 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 05:10:56.208663    9360 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 05:10:56.214425    9360 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 05:10:56.214500    9360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 05:10:56.214503    9360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-581000 minikube.k8s.io/updated_at=2024_10_28T05_10_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=running-upgrade-581000 minikube.k8s.io/primary=true
	I1028 05:10:56.255279    9360 ops.go:34] apiserver oom_adj: -16
	I1028 05:10:56.255274    9360 kubeadm.go:1113] duration metric: took 40.83775ms to wait for elevateKubeSystemPrivileges
	I1028 05:10:56.255359    9360 kubeadm.go:394] duration metric: took 4m11.906103292s to StartCluster
	I1028 05:10:56.255371    9360 settings.go:142] acquiring lock: {Name:mka2e81574940ea53fced239aa2ef4cd7479a0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:10:56.255565    9360 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:10:56.255994    9360 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/kubeconfig: {Name:mk90a124f6c448e81120cf90ba82d6374e9cd851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:10:56.256231    9360 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:10:56.256254    9360 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 05:10:56.256297    9360 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-581000"
	I1028 05:10:56.256309    9360 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-581000"
	W1028 05:10:56.256312    9360 addons.go:243] addon storage-provisioner should already be in state true
	I1028 05:10:56.256327    9360 host.go:66] Checking if "running-upgrade-581000" exists ...
	I1028 05:10:56.256345    9360 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-581000"
	I1028 05:10:56.256355    9360 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-581000"
	I1028 05:10:56.256418    9360 config.go:182] Loaded profile config "running-upgrade-581000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:10:56.257325    9360 kapi.go:59] client config for running-upgrade-581000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/running-upgrade-581000/client.key", CAFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104b56680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 05:10:56.257948    9360 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-581000"
	W1028 05:10:56.257953    9360 addons.go:243] addon default-storageclass should already be in state true
	I1028 05:10:56.257960    9360 host.go:66] Checking if "running-upgrade-581000" exists ...
	I1028 05:10:56.260226    9360 out.go:177] * Verifying Kubernetes components...
	I1028 05:10:56.260605    9360 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 05:10:56.266365    9360 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 05:10:56.266374    9360 sshutil.go:53] new ssh client: &{IP:localhost Port:57998 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/running-upgrade-581000/id_rsa Username:docker}
	I1028 05:10:56.270156    9360 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:53.779919    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:53.779962    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:56.274230    9360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:56.277166    9360 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 05:10:56.277172    9360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 05:10:56.277177    9360 sshutil.go:53] new ssh client: &{IP:localhost Port:57998 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/running-upgrade-581000/id_rsa Username:docker}
	I1028 05:10:56.355049    9360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 05:10:56.360827    9360 api_server.go:52] waiting for apiserver process to appear ...
	I1028 05:10:56.360893    9360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:10:56.365288    9360 api_server.go:72] duration metric: took 109.047ms to wait for apiserver process to appear ...
	I1028 05:10:56.365296    9360 api_server.go:88] waiting for apiserver healthz status ...
	I1028 05:10:56.365303    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:56.406601    9360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 05:10:56.433960    9360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 05:10:56.770943    9360 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 05:10:56.770957    9360 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 05:10:58.781129    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:58.781164    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:01.367323    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:01.367382    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:03.782619    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:03.782649    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:06.367639    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:06.367681    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:08.784533    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:08.784567    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:11.368057    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:11.368079    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:13.785876    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:13.785909    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:16.368468    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:16.368541    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:18.787365    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:18.787466    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:21.369151    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:21.369189    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:26.369959    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:26.370012    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1028 05:11:26.772677    9360 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1028 05:11:26.776945    9360 out.go:177] * Enabled addons: storage-provisioner
	I1028 05:11:23.789892    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:23.790072    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:23.806022    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:11:23.806124    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:23.819350    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:11:23.819425    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:23.830383    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:11:23.830475    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:23.841018    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:11:23.841089    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:23.851285    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:11:23.851358    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:23.868649    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:11:23.868732    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:23.879005    9481 logs.go:282] 0 containers: []
	W1028 05:11:23.879015    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:23.879080    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:23.899768    9481 logs.go:282] 0 containers: []
	W1028 05:11:23.899782    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:11:23.899790    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:11:23.899795    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:11:23.922245    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:23.922256    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:23.926311    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:11:23.926320    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:11:23.952570    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:11:23.952584    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:11:23.965559    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:11:23.965571    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:11:23.977659    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:11:23.977670    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:11:23.995670    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:11:23.995683    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:11:24.017965    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:24.017976    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:24.057234    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:11:24.057241    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:11:24.071301    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:11:24.071311    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:11:24.088259    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:11:24.088272    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:11:24.104411    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:24.104425    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:24.215633    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:11:24.215647    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:11:24.229577    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:11:24.229590    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:11:24.248900    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:24.248910    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:26.775436    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:26.783771    9360 addons.go:510] duration metric: took 30.528183458s for enable addons: enabled=[storage-provisioner]
	I1028 05:11:31.775725    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:31.776109    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:31.808874    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:11:31.809033    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:31.832241    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:11:31.832349    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:31.845787    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:11:31.845876    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:31.857963    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:11:31.858057    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:31.869033    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:11:31.869115    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:31.885753    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:11:31.885835    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:31.896440    9481 logs.go:282] 0 containers: []
	W1028 05:11:31.896452    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:31.896513    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:31.907151    9481 logs.go:282] 0 containers: []
	W1028 05:11:31.907164    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:11:31.907185    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:11:31.907192    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:11:31.932037    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:11:31.932048    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:11:31.947699    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:11:31.947709    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:11:31.959761    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:11:31.959771    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:11:31.978648    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:11:31.978658    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:11:31.996282    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:31.996291    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:32.020579    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:32.020586    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:32.024756    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:32.024762    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:32.060015    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:11:32.060026    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:11:32.071834    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:11:32.071844    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:11:32.086017    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:11:32.086026    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:11:32.099981    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:11:32.099994    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:11:32.111937    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:11:32.111948    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:11:32.129027    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:32.129037    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:32.166419    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:11:32.166427    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:11:31.371051    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:31.371149    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:34.680411    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:36.373278    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:36.373323    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:39.682627    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:39.682822    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:39.698198    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:11:39.698292    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:39.711124    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:11:39.711200    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:39.721894    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:11:39.721974    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:39.732345    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:11:39.732420    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:39.742583    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:11:39.742651    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:39.753498    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:11:39.753578    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:39.763842    9481 logs.go:282] 0 containers: []
	W1028 05:11:39.763857    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:39.763919    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:39.774249    9481 logs.go:282] 0 containers: []
	W1028 05:11:39.774260    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:11:39.774267    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:11:39.774272    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:11:39.787101    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:11:39.787116    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:11:39.800013    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:39.800028    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:39.838091    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:11:39.838100    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:11:39.867839    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:11:39.867850    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:11:39.882150    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:11:39.882158    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:11:39.904279    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:11:39.904294    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:11:39.916439    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:11:39.916450    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:11:39.937603    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:39.937614    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:39.941640    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:11:39.941648    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:11:39.955663    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:11:39.955672    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:11:39.967101    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:11:39.967111    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:11:39.978571    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:39.978579    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:40.004132    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:40.004140    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:40.041103    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:11:40.041114    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:11:42.560523    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:41.375345    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:41.375396    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:47.562726    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:47.562829    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:47.573784    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:11:47.573865    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:47.584895    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:11:47.584968    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:47.595552    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:11:47.595639    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:47.606909    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:11:47.606993    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:47.617916    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:11:47.617988    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:47.629082    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:11:47.629161    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:47.639522    9481 logs.go:282] 0 containers: []
	W1028 05:11:47.639533    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:47.639593    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:47.650341    9481 logs.go:282] 0 containers: []
	W1028 05:11:47.650353    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:11:47.650362    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:11:47.650368    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:11:47.665651    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:11:47.665664    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:11:47.681876    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:11:47.681889    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:11:47.706755    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:11:47.706766    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:11:47.724099    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:11:47.724109    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:11:47.737340    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:47.737350    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:47.761416    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:11:47.761426    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:11:47.773231    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:47.773247    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:47.812113    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:47.812122    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:47.816206    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:47.816212    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:47.850125    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:11:47.850137    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:11:47.864002    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:11:47.864015    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:11:47.875777    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:11:47.875788    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:11:47.887287    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:11:47.887300    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:11:47.899127    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:11:47.899137    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:11:46.376580    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:46.376667    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:50.418936    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:51.377278    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:51.377363    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:55.421316    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:55.421753    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:55.452796    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:11:55.452940    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:55.471456    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:11:55.471564    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:55.485346    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:11:55.485435    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:55.496941    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:11:55.497030    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:55.508054    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:11:55.508133    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:55.518733    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:11:55.518813    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:55.528272    9481 logs.go:282] 0 containers: []
	W1028 05:11:55.528283    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:55.528344    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:55.538276    9481 logs.go:282] 0 containers: []
	W1028 05:11:55.538286    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:11:55.538294    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:11:55.538299    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:11:55.552995    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:11:55.553009    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:11:55.564378    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:11:55.564392    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:11:55.583400    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:11:55.583414    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:11:55.603889    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:11:55.603898    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:11:55.617756    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:55.617765    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:55.622687    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:11:55.622696    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:11:55.636758    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:11:55.636768    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:11:55.652127    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:11:55.652148    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:11:55.664636    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:55.664646    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:55.687956    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:55.687962    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:55.730528    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:55.730537    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:55.766440    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:11:55.766449    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:11:55.791610    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:11:55.791622    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:11:55.810549    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:11:55.810561    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:11:58.324697    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:56.379764    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:56.380003    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:56.403396    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:11:56.403486    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:56.417992    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:11:56.418072    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:56.433375    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:11:56.433469    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:56.456555    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:11:56.456640    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:56.470354    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:11:56.470432    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:56.481064    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:11:56.481142    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:56.496764    9360 logs.go:282] 0 containers: []
	W1028 05:11:56.496779    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:56.496840    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:56.507711    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:11:56.507726    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:11:56.507731    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:11:56.523925    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:11:56.523937    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:11:56.535490    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:56.535502    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:56.559029    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:11:56.559037    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:11:56.574623    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:56.574634    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:56.579321    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:56.579329    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:56.618026    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:11:56.618037    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:11:56.632970    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:11:56.632983    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:11:56.647577    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:11:56.647589    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:11:56.659572    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:11:56.659581    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:11:56.674262    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:11:56.674275    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:11:56.692160    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:56.692172    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:56.730056    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:11:56.730066    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:11:59.243663    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:03.326001    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:03.326241    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:03.348679    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:03.348818    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:03.365003    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:03.365098    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:03.386854    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:03.386936    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:03.400146    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:03.400229    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:03.410434    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:03.410504    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:03.421601    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:03.421674    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:03.431262    9481 logs.go:282] 0 containers: []
	W1028 05:12:03.431279    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:03.431350    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:03.441565    9481 logs.go:282] 0 containers: []
	W1028 05:12:03.441577    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:03.441584    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:03.441590    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:03.446267    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:03.446276    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:03.460580    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:03.460593    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:03.471967    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:03.471978    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:03.483776    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:03.483791    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:03.521947    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:03.521955    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:03.542241    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:03.542256    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:03.558808    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:03.558817    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:03.584191    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:03.584202    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:03.600971    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:03.600981    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:03.612812    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:03.612827    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:03.630436    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:03.630447    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:03.646932    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:03.646943    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:04.246026    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:04.246225    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:04.263436    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:04.263530    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:04.275911    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:04.275996    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:04.286825    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:04.286899    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:04.297524    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:04.297593    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:04.307918    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:04.307996    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:04.317974    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:04.318047    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:04.331874    9360 logs.go:282] 0 containers: []
	W1028 05:12:04.331889    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:04.331959    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:04.342121    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:04.342135    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:04.342142    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:04.357260    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:04.357271    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:04.368736    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:04.368746    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:04.392551    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:04.392562    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:04.427611    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:04.427618    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:04.461863    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:04.461874    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:04.478714    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:04.478725    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:04.493106    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:04.493115    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:04.504631    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:04.504645    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:04.516291    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:04.516306    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:04.528359    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:04.528370    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:04.549889    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:04.549904    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:04.555037    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:04.555044    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:03.659000    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:03.659011    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:03.684568    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:03.684578    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:06.226813    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:07.069754    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:11.229082    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:11.229342    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:11.246426    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:11.246527    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:11.259472    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:11.259548    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:11.270540    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:11.270624    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:11.281585    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:11.281660    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:11.292623    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:11.292696    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:11.303836    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:11.303915    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:11.313996    9481 logs.go:282] 0 containers: []
	W1028 05:12:11.314006    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:11.314064    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:11.324343    9481 logs.go:282] 0 containers: []
	W1028 05:12:11.324353    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:11.324360    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:11.324366    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:11.358513    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:11.358525    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:11.380576    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:11.380586    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:11.398318    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:11.398328    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:11.410220    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:11.410231    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:11.439785    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:11.439802    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:11.454969    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:11.454981    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:11.473393    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:11.473405    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:11.487333    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:11.487343    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:11.512106    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:11.512113    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:11.550557    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:11.550566    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:11.557032    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:11.557040    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:11.577893    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:11.577908    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:11.596843    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:11.596852    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:11.611509    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:11.611518    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:12.072416    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:12.072723    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:12.103270    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:12.103412    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:12.122704    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:12.122803    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:12.135593    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:12.135678    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:12.146847    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:12.146916    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:12.157655    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:12.157725    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:12.168295    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:12.168367    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:12.187948    9360 logs.go:282] 0 containers: []
	W1028 05:12:12.187961    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:12.188029    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:12.198184    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:12.198199    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:12.198205    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:12.203103    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:12.203111    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:12.214749    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:12.214760    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:12.229570    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:12.229580    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:12.242344    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:12.242355    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:12.276263    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:12.276272    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:12.310521    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:12.310531    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:12.324970    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:12.324980    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:12.338935    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:12.338947    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:12.351131    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:12.351140    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:12.363763    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:12.363773    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:12.381199    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:12.381210    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:12.397483    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:12.397493    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:14.125671    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:14.925283    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:19.127844    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:19.128106    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:19.153267    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:19.153369    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:19.168112    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:19.168201    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:19.180075    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:19.180159    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:19.191093    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:19.191173    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:19.201876    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:19.201962    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:19.212722    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:19.212807    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:19.223576    9481 logs.go:282] 0 containers: []
	W1028 05:12:19.223585    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:19.223645    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:19.234451    9481 logs.go:282] 0 containers: []
	W1028 05:12:19.234462    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:19.234470    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:19.234475    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:19.250190    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:19.250198    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:19.289047    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:19.289056    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:19.303409    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:19.303423    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:19.315424    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:19.315438    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:19.319434    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:19.319443    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:19.343961    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:19.343971    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:19.367172    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:19.367178    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:19.384094    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:19.384105    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:19.402655    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:19.402706    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:19.421939    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:19.421951    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:19.433822    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:19.433833    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:19.447651    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:19.447662    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:19.461407    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:19.461419    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:19.472991    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:19.473003    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:22.011320    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:19.927529    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:19.927744    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:19.943970    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:19.944062    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:19.957249    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:19.957329    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:19.968880    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:19.968950    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:19.979290    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:19.979367    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:19.989600    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:19.989677    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:20.000161    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:20.000237    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:20.015468    9360 logs.go:282] 0 containers: []
	W1028 05:12:20.015482    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:20.015544    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:20.026036    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:20.026050    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:20.026057    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:20.030712    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:20.030718    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:20.044962    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:20.044972    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:20.057340    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:20.057354    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:20.072630    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:20.072640    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:20.084317    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:20.084327    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:20.101148    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:20.101158    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:20.113805    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:20.113817    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:20.149914    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:20.149925    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:20.161431    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:20.161443    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:20.186418    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:20.186424    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:20.200536    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:20.200547    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:20.213258    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:20.213268    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:22.751898    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:27.013575    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:27.013952    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:27.039152    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:27.039288    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:27.056033    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:27.056127    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:27.069490    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:27.069576    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:27.081648    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:27.081734    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:27.091993    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:27.092072    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:27.102439    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:27.102516    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:27.113330    9481 logs.go:282] 0 containers: []
	W1028 05:12:27.113341    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:27.113406    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:27.125596    9481 logs.go:282] 0 containers: []
	W1028 05:12:27.125606    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:27.125615    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:27.125621    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:27.141163    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:27.141176    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:27.155754    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:27.155765    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:27.170615    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:27.170625    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:27.182524    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:27.182537    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:27.195909    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:27.195923    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:27.220981    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:27.220992    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:27.238902    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:27.238914    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:27.274850    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:27.274865    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:27.300538    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:27.300548    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:27.315296    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:27.315305    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:27.351674    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:27.351685    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:27.355520    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:27.355527    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:27.368788    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:27.368797    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:27.384076    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:27.384085    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:27.754053    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:27.754243    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:27.768150    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:27.768245    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:27.779647    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:27.779724    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:27.795604    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:27.795683    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:27.805875    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:27.805963    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:27.816553    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:27.816631    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:27.826849    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:27.826920    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:27.837259    9360 logs.go:282] 0 containers: []
	W1028 05:12:27.837269    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:27.837330    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:27.847842    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:27.847859    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:27.847864    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:27.862204    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:27.862214    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:27.874386    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:27.874398    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:27.885673    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:27.885684    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:27.900578    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:27.900590    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:27.917856    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:27.917866    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:27.930169    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:27.930178    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:27.956085    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:27.956095    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:27.992522    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:27.992535    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:27.997662    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:27.997669    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:28.039130    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:28.039144    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:28.054046    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:28.054056    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:28.064981    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:28.064995    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:29.898248    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:30.580309    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:34.900785    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:34.900933    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:34.915009    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:34.915094    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:34.925869    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:34.925975    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:34.936418    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:34.936502    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:34.946978    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:34.947056    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:34.957462    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:34.957540    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:34.968082    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:34.968158    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:34.978381    9481 logs.go:282] 0 containers: []
	W1028 05:12:34.978397    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:34.978468    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:34.988700    9481 logs.go:282] 0 containers: []
	W1028 05:12:34.988715    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:34.988723    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:34.988729    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:35.000949    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:35.000962    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:35.038641    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:35.038651    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:35.042878    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:35.042883    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:35.060213    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:35.060223    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:35.071994    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:35.072006    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:35.109768    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:35.109778    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:35.127338    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:35.127349    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:35.140923    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:35.140934    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:35.155855    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:35.155867    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:35.178065    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:35.178075    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:35.201563    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:35.201574    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:35.215847    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:35.215857    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:35.240383    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:35.240393    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:35.254182    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:35.254192    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
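
Each cycle begins by rediscovering container IDs with one `docker ps -a` per component, filtered on the `k8s_` name prefix (the ssh_runner.go:195 / logs.go:282 pairs above). The following is a hypothetical local equivalent using os/exec; in the report these commands actually run inside the guest VM over SSH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_"+component, "--format={{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }

Note that `-a` includes exited containers, so a component that crashed and was restarted can report two IDs, as kube-apiserver does in the 9481 process above.
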
	I1028 05:12:37.767735    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:35.582551    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:35.582796    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:35.602208    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:35.602318    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:35.616708    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:35.616801    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:35.628926    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:35.629015    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:35.639863    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:35.639941    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:35.650064    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:35.650137    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:35.660373    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:35.660442    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:35.670595    9360 logs.go:282] 0 containers: []
	W1028 05:12:35.670608    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:35.670677    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:35.680671    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:35.680686    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:35.680692    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:35.723495    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:35.723508    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:35.746425    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:35.746436    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:35.758586    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:35.758600    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:35.771456    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:35.771470    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:35.782864    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:35.782875    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:35.795908    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:35.795920    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:35.831122    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:35.831136    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:35.835782    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:35.835791    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:35.860579    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:35.860588    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:35.878254    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:35.878263    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:35.889862    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:35.889871    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:35.904313    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:35.904323    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:38.423721    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:42.768142    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:42.768399    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:42.786311    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:42.786413    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:42.803794    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:42.803875    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:42.815400    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:42.815477    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:42.826052    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:42.826128    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:42.836848    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:42.836921    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:42.847796    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:42.847875    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:42.857845    9481 logs.go:282] 0 containers: []
	W1028 05:12:42.857856    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:42.857909    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:42.868416    9481 logs.go:282] 0 containers: []
	W1028 05:12:42.868430    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:42.868437    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:42.868443    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:42.872687    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:42.872693    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:42.884638    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:42.884648    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:42.901956    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:42.901968    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:42.915189    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:42.915199    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:42.950469    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:42.950484    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:42.965593    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:42.965606    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:42.981257    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:42.981268    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:43.006634    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:43.006643    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:43.045953    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:43.045962    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:43.070629    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:43.070639    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:43.084530    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:43.084541    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:43.096184    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:43.096197    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:43.110772    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:43.110784    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:43.127629    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:43.127639    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:43.425907    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:43.426096    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:43.446566    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:43.446649    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:43.458342    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:43.458416    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:43.469541    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:43.469621    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:43.487112    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:43.487178    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:43.497466    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:43.497530    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:43.508369    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:43.508448    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:43.518512    9360 logs.go:282] 0 containers: []
	W1028 05:12:43.518523    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:43.518581    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:43.529323    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:43.529338    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:43.529344    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:43.541512    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:43.541527    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:43.562192    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:43.562207    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:43.574289    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:43.574301    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:43.578615    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:43.578623    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:43.593181    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:43.593195    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:43.605276    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:43.605289    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:43.623011    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:43.623026    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:43.634321    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:43.634331    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:43.658769    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:43.658778    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:43.670155    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:43.670163    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:43.704795    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:43.704802    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:43.739817    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:43.739826    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:45.643351    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:46.256180    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:50.645705    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:50.645963    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:50.666288    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:50.666390    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:50.680748    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:50.680834    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:50.693126    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:50.693205    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:50.705170    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:50.705244    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:50.715767    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:50.715846    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:50.726165    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:50.726240    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:50.740651    9481 logs.go:282] 0 containers: []
	W1028 05:12:50.740660    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:50.740720    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:50.751555    9481 logs.go:282] 0 containers: []
	W1028 05:12:50.751565    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:50.751574    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:50.751580    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:50.788353    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:50.788365    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:50.802205    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:50.802215    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:50.826774    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:50.826788    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:50.838215    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:50.838226    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:50.852266    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:50.852278    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:50.864154    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:50.864166    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:50.886401    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:50.886411    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:50.898418    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:50.898429    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:50.932646    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:50.932658    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:50.949119    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:50.949133    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:50.961089    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:50.961101    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:50.977627    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:50.977639    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:51.001430    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:51.001437    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:51.005976    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:51.005982    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:53.522787    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:51.258297    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:51.258429    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:51.271736    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:51.271822    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:51.282440    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:51.282521    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:51.293045    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:51.293124    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:51.303833    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:51.303906    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:51.314062    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:51.314141    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:51.324790    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:51.324868    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:51.334793    9360 logs.go:282] 0 containers: []
	W1028 05:12:51.334803    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:51.334869    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:51.345414    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:51.345433    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:51.345438    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:51.360288    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:51.360299    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:51.372330    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:51.372339    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:51.387088    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:51.387099    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:51.401666    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:51.401675    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:51.438082    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:51.438091    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:51.457197    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:51.457209    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:51.469485    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:51.469498    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:51.486531    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:51.486541    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:51.497681    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:51.497692    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:51.522585    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:51.522593    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:51.556394    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:51.556404    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:51.561535    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:51.561546    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
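
The "Gathering logs for ..." lines then fan out over a fixed set of sources, each a single shell command capped at 400 lines. The commands below are copied verbatim from the log; wrapping them in a local `bash -c` loop is an assumption for the sketch (minikube runs them in the guest through its ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            // per-container logs use the IDs discovered earlier, e.g.:
            "kube-apiserver [88ba9432ba34]": "docker logs --tail 400 88ba9432ba34",
        }
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", name, err)
            }
            _ = out // a real collector would attach this output to the report
        }
    }

The `which crictl || echo crictl` fallback keeps the pipeline alive on images without crictl in PATH, and the trailing `|| sudo docker ps -a` falls back to Docker if crictl itself fails.
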
	I1028 05:12:54.075351    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:58.524998    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:58.525250    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:58.547492    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:58.547599    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:58.562492    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:58.562580    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:58.574801    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:58.574873    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:58.585940    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:58.586020    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:58.596368    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:58.596448    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:58.610827    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:58.610894    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:58.624724    9481 logs.go:282] 0 containers: []
	W1028 05:12:58.624738    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:58.624810    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:58.635487    9481 logs.go:282] 0 containers: []
	W1028 05:12:58.635502    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:58.635510    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:58.635516    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:59.077650    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:59.077798    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:59.090181    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:12:59.090265    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:59.100863    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:12:59.100951    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:59.112856    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:12:59.112931    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:59.127668    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:12:59.127743    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:59.138270    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:12:59.138347    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:59.148646    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:12:59.148723    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:59.159215    9360 logs.go:282] 0 containers: []
	W1028 05:12:59.159226    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:59.159283    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:59.169216    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:12:59.169230    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:12:59.169236    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:12:59.183642    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:12:59.183654    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:12:59.201926    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:12:59.201938    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:12:59.213761    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:12:59.213771    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:12:59.231174    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:59.231188    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:59.255936    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:59.255943    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:59.292837    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:59.292847    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:59.297408    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:12:59.297418    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:12:59.311470    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:12:59.311480    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:12:59.323370    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:12:59.323381    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:12:59.335057    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:12:59.335068    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:12:59.346109    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:12:59.346118    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:59.357355    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:59.357364    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:58.647284    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:58.647295    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:58.658741    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:58.658752    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:58.675956    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:58.675966    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:58.714656    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:58.714664    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:58.718533    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:58.718541    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:58.758207    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:58.758221    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:58.772208    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:58.772217    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:58.785409    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:58.785420    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:58.799565    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:58.799576    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:58.813547    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:58.813557    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:58.824724    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:58.824734    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:58.836683    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:58.836694    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:58.865270    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:58.865280    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:58.880192    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:58.880203    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:01.408846    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:01.894626    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:06.411074    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:06.411201    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:06.422360    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:06.422457    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:06.436561    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:06.436642    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:06.447104    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:06.447181    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:06.457518    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:06.457587    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:06.468268    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:06.468334    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:06.478665    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:06.478738    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:06.490156    9481 logs.go:282] 0 containers: []
	W1028 05:13:06.490167    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:06.490232    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:06.504210    9481 logs.go:282] 0 containers: []
	W1028 05:13:06.504224    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:06.504234    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:06.504241    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:06.518844    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:06.518854    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:06.531880    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:06.531891    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:06.546865    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:06.546875    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:06.558637    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:06.558647    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:06.572134    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:06.572146    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:06.586728    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:06.586739    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:06.601407    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:06.601419    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:06.612839    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:06.612849    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:06.630168    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:06.630177    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:06.641667    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:06.641677    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:06.680646    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:06.680653    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:06.704077    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:06.704084    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:06.739577    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:06.739589    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:06.763673    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:06.763683    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:06.897167    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:06.897286    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:06.910897    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:06.910983    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:06.923541    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:06.923615    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:06.933767    9360 logs.go:282] 2 containers: [47a579d7d206 f9e74904e5af]
	I1028 05:13:06.933845    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:06.944525    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:06.944598    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:06.954959    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:06.955029    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:06.965197    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:06.965270    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:06.975414    9360 logs.go:282] 0 containers: []
	W1028 05:13:06.975427    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:06.975486    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:06.985849    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:06.985862    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:06.985868    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:06.999517    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:06.999526    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:07.011067    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:07.011078    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:07.022669    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:07.022679    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:07.045285    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:07.045297    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:07.070394    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:07.070406    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:07.104856    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:07.104866    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:07.141096    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:07.141107    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:07.155877    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:07.155889    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:07.170826    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:07.170837    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:07.182680    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:07.182695    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:07.193893    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:07.193904    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:07.205890    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:07.205900    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
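
Every probe in this section fails with the same signature, `context deadline exceeded (Client.Timeout exceeded while awaiting headers)`. That exact wording is produced by Go's net/http whenever `http.Client.Timeout` expires before response headers arrive, meaning the connection was accepted but the apiserver never answered. A minimal reproduction against a hypothetical slow server:

    package main

    import (
        "fmt"
        "net/http"
        "net/http/httptest"
        "time"
    )

    func main() {
        // a server that never sends headers; it returns once the client gives up
        srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            <-r.Context().Done()
        }))
        defer srv.Close()

        client := &http.Client{Timeout: time.Second}
        _, err := client.Get(srv.URL)
        fmt.Println(err)
        // prints: Get "http://127.0.0.1:...": context deadline exceeded
        //         (Client.Timeout exceeded while awaiting headers)
    }

A plain "connection refused" would suggest nothing is listening on 8443; a header timeout like this one means something accepted the TCP connection but the apiserver never became ready to serve.
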
	I1028 05:13:09.269804    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:09.712337    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:14.272014    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:14.272193    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:14.287757    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:14.287841    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:14.298369    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:14.298452    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:14.309076    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:14.309146    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:14.327264    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:14.327345    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:14.337538    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:14.337602    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:14.347644    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:14.347717    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:14.357954    9481 logs.go:282] 0 containers: []
	W1028 05:13:14.357968    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:14.358034    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:14.368701    9481 logs.go:282] 0 containers: []
	W1028 05:13:14.368711    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:14.368720    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:14.368725    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:14.379927    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:14.379940    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:14.391672    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:14.391684    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:14.403762    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:14.403772    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:14.426403    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:14.426410    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:14.443534    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:14.443546    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:14.468778    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:14.468788    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:14.484994    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:14.485004    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:14.499278    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:14.499288    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:14.512223    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:14.512232    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:14.548313    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:14.548324    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:14.565524    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:14.565534    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:14.602801    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:14.602809    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:14.606880    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:14.606889    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:14.622770    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:14.622781    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:17.136541    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:14.714535    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:14.714691    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:14.726132    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:14.726214    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:14.736514    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:14.736587    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:14.747163    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:14.747233    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:14.758695    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:14.758768    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:14.768927    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:14.769030    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:14.779235    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:14.779302    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:14.790102    9360 logs.go:282] 0 containers: []
	W1028 05:13:14.790114    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:14.790188    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:14.800485    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:14.800502    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:14.800508    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:14.812330    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:14.812339    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:14.816970    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:14.816976    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:14.830858    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:14.830868    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:14.842418    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:14.842429    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:14.854092    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:14.854105    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:14.868927    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:14.868938    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:14.880791    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:14.880802    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:14.904953    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:14.904962    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:14.922427    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:14.922437    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:14.955348    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:14.955354    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:14.991280    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:14.991295    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:15.007660    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:15.007670    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:15.021749    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:15.021759    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:15.033165    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:15.033177    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:17.550489    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:22.138660    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:22.138888    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:22.174728    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:22.174821    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:22.200392    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:22.200474    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:22.211567    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:22.211642    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:22.225978    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:22.226060    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:22.237023    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:22.237096    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:22.247718    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:22.247796    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:22.258168    9481 logs.go:282] 0 containers: []
	W1028 05:13:22.258182    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:22.258253    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:22.268151    9481 logs.go:282] 0 containers: []
	W1028 05:13:22.268163    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:22.268172    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:22.268178    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:22.292338    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:22.292351    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:22.311799    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:22.311812    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:22.327534    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:22.327547    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:22.339626    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:22.339640    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:22.377694    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:22.377703    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:22.393849    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:22.393862    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:22.429521    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:22.429534    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:22.444396    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:22.444409    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:22.462772    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:22.462782    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:22.476660    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:22.476675    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:22.481383    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:22.481389    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:22.496602    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:22.496612    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:22.520849    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:22.520856    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:22.532554    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:22.532564    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:22.552114    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:22.552224    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:22.562781    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:22.562859    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:22.573472    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:22.573557    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:22.584687    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:22.584767    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:22.595573    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:22.595644    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:22.615700    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:22.615775    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:22.626537    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:22.626608    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:22.636830    9360 logs.go:282] 0 containers: []
	W1028 05:13:22.636848    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:22.636907    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:22.647210    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:22.647228    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:22.647234    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:22.681847    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:22.681857    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:22.695655    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:22.695671    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:22.707602    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:22.707612    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:22.719053    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:22.719063    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:22.732134    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:22.732146    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:22.747022    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:22.747031    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:22.758932    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:22.758944    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:22.771465    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:22.771482    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:22.806117    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:22.806130    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:22.820670    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:22.820685    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:22.838492    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:22.838501    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:22.864997    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:22.865007    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:22.869283    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:22.869289    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:22.880943    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:22.880953    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:25.049564    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:25.394664    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:30.050409    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:30.051078    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:30.090603    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:30.090756    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:30.112165    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:30.112267    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:30.129075    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:30.129166    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:30.141681    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:30.141762    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:30.152710    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:30.152788    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:30.163612    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:30.163694    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:30.173800    9481 logs.go:282] 0 containers: []
	W1028 05:13:30.173819    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:30.173891    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:30.185446    9481 logs.go:282] 0 containers: []
	W1028 05:13:30.185457    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:30.185467    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:30.185472    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:30.203323    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:30.203335    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:30.214988    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:30.214999    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:30.240868    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:30.240878    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:30.257979    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:30.257993    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:30.276166    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:30.276177    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:30.290306    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:30.290316    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:30.305440    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:30.305452    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:30.322913    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:30.322923    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:30.327183    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:30.327189    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:30.362439    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:30.362455    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:30.380122    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:30.380133    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:30.393954    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:30.393964    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:30.418047    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:30.418061    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:30.430439    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:30.430452    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:32.972467    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:30.397115    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:30.397211    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:30.408665    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:30.408746    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:30.423272    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:30.423354    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:30.435201    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:30.435285    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:30.448303    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:30.448383    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:30.462355    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:30.462426    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:30.473298    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:30.473368    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:30.483349    9360 logs.go:282] 0 containers: []
	W1028 05:13:30.483368    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:30.483445    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:30.494148    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:30.494166    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:30.494171    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:30.505934    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:30.505944    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:30.541091    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:30.541098    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:30.554636    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:30.554649    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:30.572051    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:30.572060    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:30.583372    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:30.583384    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:30.595419    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:30.595433    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:30.610745    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:30.610755    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:30.626518    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:30.626527    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:30.642330    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:30.642344    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:30.668046    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:30.668053    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:30.679534    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:30.679547    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:30.684324    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:30.684329    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:30.719581    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:30.719590    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:30.734490    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:30.734500    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:33.248352    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:37.975190    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:37.975547    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:38.003922    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:38.004070    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:38.022445    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:38.022550    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:38.036007    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:38.036095    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:38.047613    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:38.047686    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:38.058208    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:38.058286    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:38.068862    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:38.068930    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:38.079663    9481 logs.go:282] 0 containers: []
	W1028 05:13:38.079674    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:38.079743    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:38.089664    9481 logs.go:282] 0 containers: []
	W1028 05:13:38.089674    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:38.089683    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:38.089689    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:38.093876    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:38.093884    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:38.110955    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:38.110965    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:38.126853    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:38.126863    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:38.149490    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:38.149501    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:38.161456    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:38.161468    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:38.172993    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:38.173003    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:38.207925    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:38.207935    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:38.233682    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:38.233692    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:38.267882    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:38.267892    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:38.307861    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:38.307883    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:38.322996    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:38.323006    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:38.352856    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:38.352869    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:38.365494    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:38.365508    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:38.377568    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:38.377580    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:38.250458    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:38.250564    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:38.261546    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:38.261623    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:38.272525    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:38.272606    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:38.284482    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:38.284567    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:38.295381    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:38.295459    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:38.309049    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:38.309179    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:38.320413    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:38.320496    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:38.331701    9360 logs.go:282] 0 containers: []
	W1028 05:13:38.331711    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:38.331774    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:38.344417    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:38.344434    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:38.344439    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:38.380544    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:38.380556    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:38.408643    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:38.408653    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:38.433612    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:38.433619    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:38.467858    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:38.467871    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:38.481588    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:38.481604    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:38.493103    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:38.493113    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:38.504898    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:38.504914    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:38.516494    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:38.516509    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:38.530724    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:38.530734    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:38.542661    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:38.542671    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:38.561676    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:38.561686    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:38.576395    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:38.576405    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:38.581042    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:38.581049    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:38.595770    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:38.595780    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:40.898574    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:41.115406    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:45.901249    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:45.901708    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:45.931810    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:45.931960    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:45.956318    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:45.956413    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:45.973923    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:45.973998    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:45.984176    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:45.984262    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:45.994568    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:45.994646    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:46.005235    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:46.005303    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:46.015756    9481 logs.go:282] 0 containers: []
	W1028 05:13:46.015768    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:46.015836    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:46.026296    9481 logs.go:282] 0 containers: []
	W1028 05:13:46.026307    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:46.026316    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:46.026321    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:46.050955    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:46.050964    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:46.064931    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:46.064941    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:46.102234    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:46.102245    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:46.106677    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:46.106686    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:46.124249    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:46.124262    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:46.136384    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:46.136394    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:46.149825    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:46.149834    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:46.165819    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:46.165832    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:46.183808    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:46.183817    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:46.198391    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:46.198402    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:46.236741    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:46.236752    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:46.252745    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:46.252760    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:46.264862    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:46.264873    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:46.288885    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:46.288902    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:46.117862    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:46.117964    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:46.134110    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:46.134194    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:46.146879    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:46.146965    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:46.158817    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:46.158906    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:46.170101    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:46.170192    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:46.182373    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:46.182450    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:46.194456    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:46.194544    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:46.205993    9360 logs.go:282] 0 containers: []
	W1028 05:13:46.206006    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:46.206078    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:46.217351    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:46.217371    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:46.217376    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:46.232263    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:46.232278    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:46.244666    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:46.244678    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:46.260684    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:46.260698    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:46.295765    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:46.295781    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:46.309292    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:46.309304    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:46.313905    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:46.313915    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:46.329065    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:46.329086    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:46.342299    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:46.342310    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:46.360617    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:46.360631    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:46.396618    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:46.396627    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:46.408925    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:46.408939    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:46.420677    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:46.420691    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:46.439504    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:46.439515    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:46.450945    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:46.450955    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:48.979016    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:48.826941    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:53.980079    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:53.980179    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:53.992976    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:13:53.993058    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:54.007898    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:13:54.007990    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:54.021042    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:13:54.021126    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:54.032559    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:13:54.032634    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:54.043785    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:13:54.043861    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:54.054734    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:13:54.054812    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:54.066435    9360 logs.go:282] 0 containers: []
	W1028 05:13:54.066447    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:54.066516    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:54.077750    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:13:54.077767    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:13:54.077772    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:13:54.095484    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:13:54.095495    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:13:54.118334    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:54.118348    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:54.156439    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:54.156457    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:54.193942    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:13:54.193953    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:13:54.209382    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:13:54.209393    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:13:54.222472    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:13:54.222485    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:13:54.235087    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:13:54.235097    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:13:54.256374    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:13:54.256388    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:13:54.270528    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:54.270542    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:54.296138    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:54.296146    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:54.300423    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:13:54.300428    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:13:54.312398    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:13:54.312408    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:13:54.324969    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:13:54.324978    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:13:54.336649    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:13:54.336658    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:53.829230    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:53.829533    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:53.855184    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:53.855314    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:53.872704    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:53.872804    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:53.886656    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:53.886730    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:53.901157    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:53.901230    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:53.912484    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:53.912567    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:53.923042    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:53.923122    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:53.932606    9481 logs.go:282] 0 containers: []
	W1028 05:13:53.932623    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:53.932681    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:53.943315    9481 logs.go:282] 0 containers: []
	W1028 05:13:53.943326    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:53.943334    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:53.943339    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:53.954706    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:53.954716    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:53.966548    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:53.966559    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:53.979519    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:53.979528    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:53.983991    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:53.984001    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:54.010149    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:54.010159    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:54.029062    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:54.029074    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:54.053824    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:54.053842    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:54.092818    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:54.092832    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:54.112716    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:54.112730    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:54.126554    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:54.126566    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:54.144002    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:54.144013    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:54.162741    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:54.162755    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:54.175637    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:54.175650    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:54.216984    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:54.217002    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:56.734684    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:56.852244    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:01.737269    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:01.737612    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:01.764035    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:14:01.764172    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:01.781472    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:14:01.781570    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:01.794874    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:14:01.794952    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:01.806194    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:14:01.806277    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:01.817204    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:14:01.817294    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:01.827893    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:14:01.827971    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:01.838675    9481 logs.go:282] 0 containers: []
	W1028 05:14:01.838686    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:01.838752    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:01.848936    9481 logs.go:282] 0 containers: []
	W1028 05:14:01.848947    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:14:01.848956    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:01.848962    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:01.890213    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:14:01.890225    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:14:01.916866    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:14:01.916883    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:14:01.932191    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:14:01.932201    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:14:01.949705    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:01.949717    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:01.973757    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:01.973777    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:02.013239    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:14:02.013251    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:14:02.031920    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:14:02.031932    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:14:02.046833    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:14:02.046850    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:14:02.059901    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:14:02.059915    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:14:02.073066    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:02.073078    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:02.077534    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:14:02.077546    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:14:02.090120    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:14:02.090131    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:14:02.106369    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:14:02.106384    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:14:02.127217    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:14:02.127231    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:01.854777    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:01.854871    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:01.865959    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:01.866043    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:01.877854    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:01.877932    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:01.891561    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:01.891643    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:01.903739    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:01.903822    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:01.917025    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:01.917102    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:01.928694    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:01.928776    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:01.939776    9360 logs.go:282] 0 containers: []
	W1028 05:14:01.939788    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:01.939863    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:01.951286    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:01.951304    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:01.951309    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:01.963954    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:01.963964    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:01.976031    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:01.976041    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:01.993956    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:01.993970    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:02.019446    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:02.019459    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:02.024145    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:02.024151    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:02.039248    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:02.039262    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:02.051832    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:02.051846    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:02.067704    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:02.067718    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:02.085495    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:02.085506    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:02.099312    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:02.099325    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:02.146716    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:02.146730    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:02.161630    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:02.161645    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:02.174435    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:02.174450    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:02.209892    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:02.209901    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:04.641697    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:04.723610    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:09.643885    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:09.644045    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:09.656203    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:14:09.656286    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:09.667273    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:14:09.667358    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:09.681968    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:14:09.682044    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:09.692580    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:14:09.692658    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:09.703147    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:14:09.703217    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:09.714062    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:14:09.714128    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:09.724210    9481 logs.go:282] 0 containers: []
	W1028 05:14:09.724223    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:09.724291    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:09.735547    9481 logs.go:282] 0 containers: []
	W1028 05:14:09.735561    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:14:09.735570    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:09.735576    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:09.779006    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:09.779029    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:09.784110    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:14:09.784119    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:14:09.800999    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:14:09.801015    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:14:09.823544    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:09.823554    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:09.847745    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:09.847759    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:09.887953    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:14:09.887965    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:14:09.904406    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:14:09.904417    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:09.916897    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:14:09.916910    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:14:09.943238    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:14:09.943252    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:14:09.959416    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:14:09.959428    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:14:09.974240    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:14:09.974251    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:14:09.987232    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:14:09.987243    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:14:10.000816    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:14:10.000831    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:14:10.013724    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:14:10.013741    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:14:12.539100    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:09.725679    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:09.725749    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:09.736762    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:09.736835    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:09.747731    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:09.747808    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:09.759072    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:09.759148    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:09.769918    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:09.769997    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:09.783580    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:09.783658    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:09.795528    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:09.795608    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:09.806890    9360 logs.go:282] 0 containers: []
	W1028 05:14:09.806903    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:09.806969    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:09.818297    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:09.818330    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:09.818337    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:09.832702    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:09.832715    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:09.844633    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:09.844644    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:09.857054    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:09.857065    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:09.872566    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:09.872577    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:09.891309    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:09.891322    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:09.917316    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:09.917325    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:09.922387    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:09.922399    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:09.940811    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:09.940823    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:09.953889    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:09.953902    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:09.966903    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:09.966915    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:09.987480    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:09.987489    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:10.003211    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:10.003222    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:10.016239    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:10.016249    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:10.052654    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:10.052669    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:12.590689    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:17.541459    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:17.541963    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:17.581917    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:14:17.582038    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:17.600249    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:14:17.600384    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:17.614282    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:14:17.614340    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:17.626445    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:14:17.626497    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:17.638204    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:14:17.638256    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:17.649855    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:14:17.649910    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:17.661612    9481 logs.go:282] 0 containers: []
	W1028 05:14:17.661620    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:17.661661    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:17.672714    9481 logs.go:282] 0 containers: []
	W1028 05:14:17.672724    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:14:17.672731    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:17.672738    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:17.710645    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:14:17.710663    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:14:17.724604    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:14:17.724615    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:14:17.742972    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:14:17.742986    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:17.755323    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:14:17.755337    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:14:17.782295    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:14:17.782314    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:14:17.797128    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:17.797142    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:17.801662    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:14:17.801670    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:14:17.816573    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:14:17.816582    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:14:17.831920    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:14:17.831932    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:14:17.848425    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:14:17.848434    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:14:17.863373    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:17.863383    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:17.888683    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:17.888692    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:17.931615    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:14:17.931633    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:14:17.948342    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:14:17.948355    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:14:17.591997    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:17.592118    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:17.613072    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:17.613162    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:17.625683    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:17.625773    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:17.636953    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:17.637056    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:17.648755    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:17.648834    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:17.660692    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:17.660770    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:17.672593    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:17.672671    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:17.692420    9360 logs.go:282] 0 containers: []
	W1028 05:14:17.692434    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:17.692498    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:17.703623    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:17.703643    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:17.703649    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:17.722739    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:17.722754    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:17.760040    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:17.760054    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:17.774915    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:17.774930    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:17.787639    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:17.787652    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:17.801048    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:17.801062    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:17.813939    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:17.813956    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:17.827163    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:17.827178    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:17.842848    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:17.842862    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:17.847554    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:17.847563    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:17.859822    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:17.859833    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:17.885881    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:17.885893    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:17.922983    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:17.922998    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:17.938776    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:17.938792    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:17.952366    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:17.952379    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:20.462579    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:20.467105    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:25.465341    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:25.465511    9481 kubeadm.go:597] duration metric: took 4m3.361189583s to restartPrimaryControlPlane
	W1028 05:14:25.465660    9481 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 05:14:25.465717    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1028 05:14:26.482087    9481 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.016377708s)
	I1028 05:14:26.482160    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 05:14:26.487166    9481 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 05:14:26.490201    9481 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 05:14:26.492900    9481 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 05:14:26.492907    9481 kubeadm.go:157] found existing configuration files:
	
	I1028 05:14:26.492936    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/admin.conf
	I1028 05:14:26.495440    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 05:14:26.495470    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 05:14:26.498724    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/kubelet.conf
	I1028 05:14:26.501590    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 05:14:26.501615    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 05:14:26.504387    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/controller-manager.conf
	I1028 05:14:26.507149    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 05:14:26.507183    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 05:14:26.510278    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/scheduler.conf
	I1028 05:14:26.513035    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 05:14:26.513069    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 05:14:26.515718    9481 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 05:14:26.534934    9481 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1028 05:14:26.535039    9481 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 05:14:26.590171    9481 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 05:14:26.590226    9481 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 05:14:26.590277    9481 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 05:14:26.642809    9481 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 05:14:26.647017    9481 out.go:235]   - Generating certificates and keys ...
	I1028 05:14:26.647061    9481 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 05:14:26.647102    9481 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 05:14:26.647141    9481 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 05:14:26.647173    9481 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 05:14:26.647212    9481 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 05:14:26.647264    9481 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 05:14:26.647308    9481 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 05:14:26.647351    9481 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 05:14:26.647388    9481 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 05:14:26.647426    9481 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 05:14:26.647444    9481 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 05:14:26.647469    9481 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 05:14:26.682146    9481 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 05:14:26.728942    9481 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 05:14:26.805502    9481 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 05:14:26.903231    9481 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 05:14:26.937209    9481 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 05:14:26.937578    9481 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 05:14:26.937642    9481 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 05:14:27.029289    9481 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 05:14:27.033231    9481 out.go:235]   - Booting up control plane ...
	I1028 05:14:27.033277    9481 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 05:14:27.033326    9481 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 05:14:27.033361    9481 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 05:14:27.033399    9481 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 05:14:27.033762    9481 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 05:14:25.468194    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:25.468596    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:25.517113    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:25.517222    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:25.535597    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:25.535701    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:25.551464    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:25.551552    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:25.563236    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:25.563311    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:25.575597    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:25.575675    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:25.587320    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:25.587395    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:25.598545    9360 logs.go:282] 0 containers: []
	W1028 05:14:25.598558    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:25.598622    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:25.611214    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:25.611234    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:25.611240    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:25.645117    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:25.645131    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:25.657065    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:25.657078    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:25.672535    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:25.672557    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:25.685607    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:25.685619    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:25.690863    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:25.690875    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:25.730215    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:25.730228    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:25.745810    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:25.745823    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:25.758319    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:25.758333    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:25.772191    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:25.772204    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:25.785580    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:25.785593    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:25.798142    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:25.798154    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:25.824352    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:25.824368    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:25.840058    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:25.840069    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:25.852418    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:25.852431    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:28.372470    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:31.035809    9481 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001872 seconds
	I1028 05:14:31.035873    9481 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 05:14:31.039422    9481 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 05:14:31.551281    9481 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 05:14:31.551510    9481 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-451000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 05:14:32.055697    9481 kubeadm.go:310] [bootstrap-token] Using token: 6anzvo.rhr2ma4rf8dnbyau
	I1028 05:14:32.062178    9481 out.go:235]   - Configuring RBAC rules ...
	I1028 05:14:32.062247    9481 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 05:14:32.062294    9481 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 05:14:32.064627    9481 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 05:14:32.068573    9481 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 05:14:32.069518    9481 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 05:14:32.070454    9481 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 05:14:32.073628    9481 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 05:14:32.226941    9481 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 05:14:32.461823    9481 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 05:14:32.462341    9481 kubeadm.go:310] 
	I1028 05:14:32.462378    9481 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 05:14:32.462384    9481 kubeadm.go:310] 
	I1028 05:14:32.462427    9481 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 05:14:32.462431    9481 kubeadm.go:310] 
	I1028 05:14:32.462448    9481 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 05:14:32.462483    9481 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 05:14:32.462515    9481 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 05:14:32.462530    9481 kubeadm.go:310] 
	I1028 05:14:32.462577    9481 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 05:14:32.462582    9481 kubeadm.go:310] 
	I1028 05:14:32.462611    9481 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 05:14:32.462616    9481 kubeadm.go:310] 
	I1028 05:14:32.462647    9481 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 05:14:32.462714    9481 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 05:14:32.462773    9481 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 05:14:32.462780    9481 kubeadm.go:310] 
	I1028 05:14:32.462851    9481 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 05:14:32.462908    9481 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 05:14:32.462914    9481 kubeadm.go:310] 
	I1028 05:14:32.462974    9481 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6anzvo.rhr2ma4rf8dnbyau \
	I1028 05:14:32.463038    9481 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f88359f6e177cdd7e99d84e4e5a9c564acd9fd4ce77f443c1fef5fb70f89e325 \
	I1028 05:14:32.463064    9481 kubeadm.go:310] 	--control-plane 
	I1028 05:14:32.463067    9481 kubeadm.go:310] 
	I1028 05:14:32.463120    9481 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 05:14:32.463124    9481 kubeadm.go:310] 
	I1028 05:14:32.463180    9481 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6anzvo.rhr2ma4rf8dnbyau \
	I1028 05:14:32.463244    9481 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f88359f6e177cdd7e99d84e4e5a9c564acd9fd4ce77f443c1fef5fb70f89e325 
	I1028 05:14:32.463371    9481 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 05:14:32.463383    9481 cni.go:84] Creating CNI manager for ""
	I1028 05:14:32.463392    9481 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:14:32.467581    9481 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 05:14:32.474732    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 05:14:32.478140    9481 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 05:14:32.483917    9481 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 05:14:32.484000    9481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 05:14:32.484027    9481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-451000 minikube.k8s.io/updated_at=2024_10_28T05_14_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=stopped-upgrade-451000 minikube.k8s.io/primary=true
	I1028 05:14:32.532876    9481 ops.go:34] apiserver oom_adj: -16
	I1028 05:14:32.532882    9481 kubeadm.go:1113] duration metric: took 48.929333ms to wait for elevateKubeSystemPrivileges
	I1028 05:14:32.532890    9481 kubeadm.go:394] duration metric: took 4m10.441555458s to StartCluster
	I1028 05:14:32.532908    9481 settings.go:142] acquiring lock: {Name:mka2e81574940ea53fced239aa2ef4cd7479a0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:14:32.533011    9481 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:14:32.533467    9481 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/kubeconfig: {Name:mk90a124f6c448e81120cf90ba82d6374e9cd851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:14:32.533689    9481 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:14:32.533694    9481 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 05:14:32.533733    9481 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-451000"
	I1028 05:14:32.533740    9481 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-451000"
	W1028 05:14:32.533743    9481 addons.go:243] addon storage-provisioner should already be in state true
	I1028 05:14:32.533755    9481 host.go:66] Checking if "stopped-upgrade-451000" exists ...
	I1028 05:14:32.533762    9481 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-451000"
	I1028 05:14:32.533772    9481 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-451000"
	I1028 05:14:32.533835    9481 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:14:32.536491    9481 out.go:177] * Verifying Kubernetes components...
	I1028 05:14:32.537163    9481 kapi.go:59] client config for stopped-upgrade-451000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/client.key", CAFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104a72680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 05:14:32.540960    9481 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-451000"
	W1028 05:14:32.540964    9481 addons.go:243] addon default-storageclass should already be in state true
	I1028 05:14:32.540972    9481 host.go:66] Checking if "stopped-upgrade-451000" exists ...
	I1028 05:14:32.541486    9481 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 05:14:32.541491    9481 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 05:14:32.541497    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	I1028 05:14:32.544569    9481 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:14:32.548563    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:14:32.552589    9481 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 05:14:32.552595    9481 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 05:14:32.552602    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	I1028 05:14:32.641269    9481 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 05:14:32.645951    9481 api_server.go:52] waiting for apiserver process to appear ...
	I1028 05:14:32.646002    9481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:14:32.650029    9481 api_server.go:72] duration metric: took 116.331125ms to wait for apiserver process to appear ...
	I1028 05:14:32.650037    9481 api_server.go:88] waiting for apiserver healthz status ...
	I1028 05:14:32.650045    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:32.663958    9481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 05:14:32.684371    9481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 05:14:33.033580    9481 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 05:14:33.033592    9481 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 05:14:33.372635    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:33.372829    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:33.386765    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:33.386846    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:33.397524    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:33.397600    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:33.408517    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:33.408601    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:33.423301    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:33.423382    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:33.433445    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:33.433517    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:33.444301    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:33.444373    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:33.454328    9360 logs.go:282] 0 containers: []
	W1028 05:14:33.454339    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:33.454406    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:33.472771    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:33.472788    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:33.472794    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:33.509580    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:33.509591    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:33.524375    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:33.524402    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:33.539396    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:33.539404    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:33.557256    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:33.557267    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:33.590469    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:33.590477    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:33.604354    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:33.604364    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:33.615857    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:33.615869    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:33.627829    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:33.627840    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:33.639462    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:33.639471    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:33.652075    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:33.652086    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:33.663578    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:33.663588    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:33.702820    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:33.702832    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:33.711669    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:33.711680    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:33.726326    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:33.726337    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:37.652068    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:37.652114    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:36.253226    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:42.652458    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:42.652483    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:41.255509    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:41.255693    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:41.267667    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:41.267754    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:41.278075    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:41.278159    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:41.288715    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:41.288790    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:41.303383    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:41.303464    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:41.313802    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:41.313888    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:41.324059    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:41.324136    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:41.334974    9360 logs.go:282] 0 containers: []
	W1028 05:14:41.334984    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:41.335050    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:41.345114    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:41.345131    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:41.345136    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:41.360877    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:41.360887    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:41.381309    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:41.381320    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:41.394225    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:41.394237    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:41.428821    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:41.428832    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:41.443299    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:41.443309    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:41.454781    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:41.454793    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:41.468343    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:41.468352    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:41.491888    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:41.491896    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:41.503747    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:41.503758    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:41.537158    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:41.537165    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:41.541641    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:41.541652    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:41.553545    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:41.553559    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:41.567561    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:41.567570    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:41.579259    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:41.579270    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:44.095563    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:47.652775    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:47.652816    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:49.097649    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:49.097757    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:49.109544    9360 logs.go:282] 1 containers: [88ba9432ba34]
	I1028 05:14:49.109627    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:49.120005    9360 logs.go:282] 1 containers: [0a656fe11ed0]
	I1028 05:14:49.120080    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:49.131759    9360 logs.go:282] 4 containers: [3dd184e63b82 64035de96af3 47a579d7d206 f9e74904e5af]
	I1028 05:14:49.131840    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:49.141854    9360 logs.go:282] 1 containers: [614a27551ac7]
	I1028 05:14:49.141932    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:49.151698    9360 logs.go:282] 1 containers: [d49fc92a5ada]
	I1028 05:14:49.151768    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:49.162380    9360 logs.go:282] 1 containers: [c04ce5b7e947]
	I1028 05:14:49.162453    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:49.172428    9360 logs.go:282] 0 containers: []
	W1028 05:14:49.172442    9360 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:49.172505    9360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:49.183471    9360 logs.go:282] 1 containers: [a5e3ea6df78b]
	I1028 05:14:49.183488    9360 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:49.183494    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:49.220364    9360 logs.go:123] Gathering logs for coredns [3dd184e63b82] ...
	I1028 05:14:49.220375    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd184e63b82"
	I1028 05:14:49.232720    9360 logs.go:123] Gathering logs for coredns [64035de96af3] ...
	I1028 05:14:49.232730    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64035de96af3"
	I1028 05:14:49.244589    9360 logs.go:123] Gathering logs for kube-proxy [d49fc92a5ada] ...
	I1028 05:14:49.244599    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49fc92a5ada"
	I1028 05:14:49.257363    9360 logs.go:123] Gathering logs for kube-controller-manager [c04ce5b7e947] ...
	I1028 05:14:49.257374    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04ce5b7e947"
	I1028 05:14:49.275028    9360 logs.go:123] Gathering logs for storage-provisioner [a5e3ea6df78b] ...
	I1028 05:14:49.275041    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e3ea6df78b"
	I1028 05:14:49.286923    9360 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:49.286936    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:49.291441    9360 logs.go:123] Gathering logs for kube-apiserver [88ba9432ba34] ...
	I1028 05:14:49.291450    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ba9432ba34"
	I1028 05:14:49.305753    9360 logs.go:123] Gathering logs for coredns [47a579d7d206] ...
	I1028 05:14:49.305766    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a579d7d206"
	I1028 05:14:49.317629    9360 logs.go:123] Gathering logs for coredns [f9e74904e5af] ...
	I1028 05:14:49.317641    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9e74904e5af"
	I1028 05:14:49.329775    9360 logs.go:123] Gathering logs for kube-scheduler [614a27551ac7] ...
	I1028 05:14:49.329787    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 614a27551ac7"
	I1028 05:14:49.344684    9360 logs.go:123] Gathering logs for container status ...
	I1028 05:14:49.344695    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:49.356349    9360 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:49.356361    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:49.390384    9360 logs.go:123] Gathering logs for etcd [0a656fe11ed0] ...
	I1028 05:14:49.390397    9360 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a656fe11ed0"
	I1028 05:14:49.409728    9360 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:49.409745    9360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:52.653461    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:52.653484    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:51.936962    9360 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:56.939170    9360 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:56.943617    9360 out.go:201] 
	W1028 05:14:56.946508    9360 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1028 05:14:56.946514    9360 out.go:270] * 
	W1028 05:14:56.946965    9360 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:14:56.958510    9360 out.go:201] 
	I1028 05:14:57.654084    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:57.654110    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:15:02.654915    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:02.654954    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1028 05:15:03.035367    9481 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1028 05:15:03.044644    9481 out.go:177] * Enabled addons: storage-provisioner
	I1028 05:15:03.051679    9481 addons.go:510] duration metric: took 30.518649291s for enable addons: enabled=[storage-provisioner]
	I1028 05:15:07.655966    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:07.656018    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
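
Both minikube processes in this capture (PIDs 9360 and 9481) poll https://10.0.2.15:8443/healthz on a roughly five-second cycle, and every attempt times out; that is what produces the GUEST_START exit above, and the repeated "Gathering logs" cycles are process 9360 re-collecting diagnostics between retries. A minimal sketch of the same probe, assuming curl is available where the polling runs (10.0.2.15 is the QEMU user-network guest address and is normally reachable only from inside the guest):

    # Hypothetical manual re-run of the healthz probe minikube performs;
    # -k skips the self-signed apiserver certificate and --max-time mirrors
    # the client timeout reported in the log.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz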
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-10-28 12:06:07 UTC, ends at Mon 2024-10-28 12:15:13 UTC. --
	Oct 28 12:14:57 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:57Z" level=error msg="ContainerStats resp: {0x40008c8b00 linux}"
	Oct 28 12:14:57 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:57Z" level=error msg="ContainerStats resp: {0x40008c8c40 linux}"
	Oct 28 12:14:57 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:57Z" level=error msg="ContainerStats resp: {<nil> }"
	Oct 28 12:14:57 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:57Z" level=error msg="Error response from daemon: No such container: f9e74904e5af869e2dad1207aa15fff74441308b514d110f8ad6f6353ddc464c Failed to get stats from container f9e74904e5af869e2dad1207aa15fff74441308b514d110f8ad6f6353ddc464c"
	Oct 28 12:14:58 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:58Z" level=error msg="ContainerStats resp: {0x4000814e80 linux}"
	Oct 28 12:14:59 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:59Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 28 12:14:59 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:59Z" level=error msg="ContainerStats resp: {0x4000815800 linux}"
	Oct 28 12:14:59 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:59Z" level=error msg="ContainerStats resp: {0x4000815c40 linux}"
	Oct 28 12:14:59 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:59Z" level=error msg="ContainerStats resp: {0x40006b46c0 linux}"
	Oct 28 12:14:59 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:59Z" level=error msg="ContainerStats resp: {0x40006ce180 linux}"
	Oct 28 12:14:59 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:59Z" level=error msg="ContainerStats resp: {0x40006b5300 linux}"
	Oct 28 12:14:59 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:59Z" level=error msg="ContainerStats resp: {0x40006b5740 linux}"
	Oct 28 12:14:59 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:14:59Z" level=error msg="ContainerStats resp: {0x40006b5b40 linux}"
	Oct 28 12:15:04 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:04Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 28 12:15:09 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:09Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 28 12:15:10 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:10Z" level=error msg="ContainerStats resp: {0x40008c8c80 linux}"
	Oct 28 12:15:10 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:10Z" level=error msg="ContainerStats resp: {0x40008c8dc0 linux}"
	Oct 28 12:15:11 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:11Z" level=error msg="ContainerStats resp: {0x4000814380 linux}"
	Oct 28 12:15:12 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:12Z" level=error msg="ContainerStats resp: {0x4000814040 linux}"
	Oct 28 12:15:12 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:12Z" level=error msg="ContainerStats resp: {0x4000814440 linux}"
	Oct 28 12:15:12 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:12Z" level=error msg="ContainerStats resp: {0x4000234ac0 linux}"
	Oct 28 12:15:12 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:12Z" level=error msg="ContainerStats resp: {0x4000814b40 linux}"
	Oct 28 12:15:12 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:12Z" level=error msg="ContainerStats resp: {0x4000814d40 linux}"
	Oct 28 12:15:12 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:12Z" level=error msg="ContainerStats resp: {0x40008154c0 linux}"
	Oct 28 12:15:12 running-upgrade-581000 cri-dockerd[3034]: time="2024-10-28T12:15:12Z" level=error msg="ContainerStats resp: {0x40006ce500 linux}"
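
The cri-dockerd errors above are stats-polling noise: the "No such container" entry refers to f9e74904e5af..., a coredns container the kubelet removed at 12:14:57 (see the kubelet section below), so the stats lookup simply raced with container removal. A hedged way to confirm such a race after the fact, assuming the docker CLI inside the guest:

    # Sketch: check whether the container cri-dockerd was polling still exists;
    # prints its state, or "removed" if Docker no longer knows the ID.
    docker inspect -f '{{.State.Status}}' f9e74904e5af 2>/dev/null || echo removed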
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8bc7051dc0a35       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   3d3bfd8002f63
	8ae41ef5fb891       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   faf66659eb4eb
	3dd184e63b824       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   faf66659eb4eb
	64035de96af3c       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   3d3bfd8002f63
	d49fc92a5adab       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   83dcce67b358e
	a5e3ea6df78b4       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   f4450e63eb2ac
	614a27551ac77       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   69f52ada57b3c
	0a656fe11ed04       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   326663ae38798
	88ba9432ba34d       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   bbbca1059469a
	c04ce5b7e9470       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   95a831596b399
	
	
	==> coredns [3dd184e63b82] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5721685493156385465.5303617861856657373. HINFO: read udp 10.244.0.2:55371->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5721685493156385465.5303617861856657373. HINFO: read udp 10.244.0.2:42253->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5721685493156385465.5303617861856657373. HINFO: read udp 10.244.0.2:33921->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5721685493156385465.5303617861856657373. HINFO: read udp 10.244.0.2:47643->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5721685493156385465.5303617861856657373. HINFO: read udp 10.244.0.2:37781->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5721685493156385465.5303617861856657373. HINFO: read udp 10.244.0.2:60566->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5721685493156385465.5303617861856657373. HINFO: read udp 10.244.0.2:49666->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5721685493156385465.5303617861856657373. HINFO: read udp 10.244.0.2:56053->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5721685493156385465.5303617861856657373. HINFO: read udp 10.244.0.2:50834->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5721685493156385465.5303617861856657373. HINFO: read udp 10.244.0.2:35430->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [64035de96af3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1276189237525740875.6329614585445786564. HINFO: read udp 10.244.0.3:59649->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1276189237525740875.6329614585445786564. HINFO: read udp 10.244.0.3:55401->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1276189237525740875.6329614585445786564. HINFO: read udp 10.244.0.3:58260->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1276189237525740875.6329614585445786564. HINFO: read udp 10.244.0.3:35022->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1276189237525740875.6329614585445786564. HINFO: read udp 10.244.0.3:59755->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1276189237525740875.6329614585445786564. HINFO: read udp 10.244.0.3:50639->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1276189237525740875.6329614585445786564. HINFO: read udp 10.244.0.3:54802->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1276189237525740875.6329614585445786564. HINFO: read udp 10.244.0.3:41185->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1276189237525740875.6329614585445786564. HINFO: read udp 10.244.0.3:47349->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1276189237525740875.6329614585445786564. HINFO: read udp 10.244.0.3:53676->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8ae41ef5fb89] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1699012542165751767.7276158918626690147. HINFO: read udp 10.244.0.2:40221->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1699012542165751767.7276158918626690147. HINFO: read udp 10.244.0.2:36596->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1699012542165751767.7276158918626690147. HINFO: read udp 10.244.0.2:36570->10.0.2.3:53: i/o timeout
	
	
	==> coredns [8bc7051dc0a3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7203938363384002495.1917388749278856658. HINFO: read udp 10.244.0.3:57328->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7203938363384002495.1917388749278856658. HINFO: read udp 10.244.0.3:42857->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7203938363384002495.1917388749278856658. HINFO: read udp 10.244.0.3:42068->10.0.2.3:53: i/o timeout
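
All four coredns instances (the two exited attempts and their replacements) fail the same way: the HINFO self-test query CoreDNS sends to its upstream resolver at 10.0.2.3:53 (QEMU's built-in slirp DNS forwarder) times out, so UDP DNS egress from the pod network is broken even though the containers start cleanly. A short sketch of testing that upstream directly from inside the guest, assuming dig is available:

    # Sketch: query the slirp DNS forwarder the same way CoreDNS's probe does;
    # +time/+tries keep the test to a single two-second attempt.
    dig +time=2 +tries=1 @10.0.2.3 example.com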
	
	
	==> describe nodes <==
	Name:               running-upgrade-581000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-581000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=running-upgrade-581000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T05_10_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:10:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-581000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:15:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:10:56 +0000   Mon, 28 Oct 2024 12:10:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:10:56 +0000   Mon, 28 Oct 2024 12:10:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:10:56 +0000   Mon, 28 Oct 2024 12:10:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:10:56 +0000   Mon, 28 Oct 2024 12:10:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-581000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 4faac77f965442eaa43e0a031bc9cdfb
	  System UUID:                4faac77f965442eaa43e0a031bc9cdfb
	  Boot ID:                    976dea23-fc56-4293-839f-d0cd95af120c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-fskjs                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-qmv2x                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-581000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-581000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-581000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-lnn9c                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-581000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-581000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-581000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-581000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-581000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-581000 event: Registered Node running-upgrade-581000 in Controller
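
Taken together with the healthz timeouts earlier, this section is the key contradiction: the node is Ready, the kubelet lease was renewed at 12:15:11, and every control-plane pod is scheduled, so from inside the guest the apiserver looks healthy while the external probe never connects. A hedged in-guest check, reusing the kubectl binary and kubeconfig paths that the "describe nodes" gathering above already used:

    # Sketch: hit /healthz via kubectl from inside the guest, bypassing
    # whatever host-to-guest networking the failing probe depends on.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/healthz'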
	
	
	==> dmesg <==
	[  +1.659839] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.092490] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.082451] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.134411] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091047] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.082749] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.604792] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +9.200751] systemd-fstab-generator[1912]: Ignoring "noauto" for root device
	[  +2.459278] systemd-fstab-generator[2186]: Ignoring "noauto" for root device
	[  +0.139955] systemd-fstab-generator[2222]: Ignoring "noauto" for root device
	[  +0.107218] systemd-fstab-generator[2233]: Ignoring "noauto" for root device
	[  +0.096450] systemd-fstab-generator[2249]: Ignoring "noauto" for root device
	[  +2.785760] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.179776] systemd-fstab-generator[2990]: Ignoring "noauto" for root device
	[  +0.084775] systemd-fstab-generator[3002]: Ignoring "noauto" for root device
	[  +0.088143] systemd-fstab-generator[3013]: Ignoring "noauto" for root device
	[  +0.105498] systemd-fstab-generator[3027]: Ignoring "noauto" for root device
	[  +2.384583] systemd-fstab-generator[3179]: Ignoring "noauto" for root device
	[  +2.523041] systemd-fstab-generator[3579]: Ignoring "noauto" for root device
	[  +1.262077] systemd-fstab-generator[3748]: Ignoring "noauto" for root device
	[Oct28 12:07] kauditd_printk_skb: 68 callbacks suppressed
	[Oct28 12:10] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.416157] systemd-fstab-generator[11950]: Ignoring "noauto" for root device
	[  +5.650848] systemd-fstab-generator[12554]: Ignoring "noauto" for root device
	[  +0.443479] systemd-fstab-generator[12686]: Ignoring "noauto" for root device
	
	
	==> etcd [0a656fe11ed0] <==
	{"level":"info","ts":"2024-10-28T12:10:51.394Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-10-28T12:10:51.395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-28T12:10:51.395Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-28T12:10:51.394Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-28T12:10:51.395Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-28T12:10:51.394Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T12:10:51.395Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T12:10:52.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-28T12:10:52.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-28T12:10:52.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-28T12:10:52.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-28T12:10:52.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-28T12:10:52.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-28T12:10:52.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-28T12:10:52.395Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-581000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T12:10:52.395Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:10:52.395Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:10:52.396Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T12:10:52.396Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T12:10:52.396Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:10:52.396Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-28T12:10:52.396Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:10:52.396Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:10:52.396Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:10:52.397Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:15:13 up 9 min,  0 users,  load average: 0.33, 0.41, 0.25
	Linux running-upgrade-581000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [88ba9432ba34] <==
	I1028 12:10:53.643937       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1028 12:10:53.662752       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1028 12:10:53.662816       1 cache.go:39] Caches are synced for autoregister controller
	I1028 12:10:53.663033       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1028 12:10:53.664260       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 12:10:53.701225       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1028 12:10:53.728962       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1028 12:10:54.390723       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1028 12:10:54.571007       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1028 12:10:54.576850       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1028 12:10:54.576862       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 12:10:54.712581       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 12:10:54.724451       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 12:10:54.743031       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1028 12:10:54.745225       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1028 12:10:54.745612       1 controller.go:611] quota admission added evaluator for: endpoints
	I1028 12:10:54.746991       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 12:10:55.704048       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1028 12:10:56.088426       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1028 12:10:56.093579       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1028 12:10:56.098272       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1028 12:10:56.152425       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 12:11:09.437091       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1028 12:11:09.468700       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1028 12:11:10.856928       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [c04ce5b7e947] <==
	I1028 12:11:09.461525       1 shared_informer.go:262] Caches are synced for namespace
	I1028 12:11:09.469298       1 shared_informer.go:262] Caches are synced for node
	I1028 12:11:09.469315       1 range_allocator.go:173] Starting range CIDR allocator
	I1028 12:11:09.469317       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1028 12:11:09.469320       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1028 12:11:09.470860       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1028 12:11:09.473752       1 range_allocator.go:374] Set node running-upgrade-581000 PodCIDR to [10.244.0.0/24]
	I1028 12:11:09.476270       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1028 12:11:09.487451       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-fskjs"
	I1028 12:11:09.500107       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-qmv2x"
	I1028 12:11:09.505398       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1028 12:11:09.553648       1 shared_informer.go:262] Caches are synced for attach detach
	I1028 12:11:09.553758       1 shared_informer.go:262] Caches are synced for PVC protection
	I1028 12:11:09.577154       1 shared_informer.go:262] Caches are synced for ephemeral
	I1028 12:11:09.604422       1 shared_informer.go:262] Caches are synced for stateful set
	I1028 12:11:09.605517       1 shared_informer.go:262] Caches are synced for persistent volume
	I1028 12:11:09.629139       1 shared_informer.go:262] Caches are synced for expand
	I1028 12:11:09.675993       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1028 12:11:09.681475       1 shared_informer.go:262] Caches are synced for disruption
	I1028 12:11:09.681481       1 disruption.go:371] Sending events to api server.
	I1028 12:11:09.682692       1 shared_informer.go:262] Caches are synced for resource quota
	I1028 12:11:09.729087       1 shared_informer.go:262] Caches are synced for resource quota
	I1028 12:11:10.097525       1 shared_informer.go:262] Caches are synced for garbage collector
	I1028 12:11:10.152803       1 shared_informer.go:262] Caches are synced for garbage collector
	I1028 12:11:10.152815       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [d49fc92a5ada] <==
	I1028 12:11:10.836358       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1028 12:11:10.836397       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1028 12:11:10.836406       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1028 12:11:10.855304       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1028 12:11:10.855316       1 server_others.go:206] "Using iptables Proxier"
	I1028 12:11:10.855331       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1028 12:11:10.855434       1 server.go:661] "Version info" version="v1.24.1"
	I1028 12:11:10.855441       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:11:10.855923       1 config.go:317] "Starting service config controller"
	I1028 12:11:10.855928       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1028 12:11:10.855938       1 config.go:226] "Starting endpoint slice config controller"
	I1028 12:11:10.855939       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1028 12:11:10.856140       1 config.go:444] "Starting node config controller"
	I1028 12:11:10.856142       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1028 12:11:10.956588       1 shared_informer.go:262] Caches are synced for node config
	I1028 12:11:10.956604       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1028 12:11:10.956604       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [614a27551ac7] <==
	W1028 12:10:53.633588       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 12:10:53.633595       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1028 12:10:53.633642       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 12:10:53.633649       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1028 12:10:53.633662       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 12:10:53.633775       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1028 12:10:53.633924       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 12:10:53.633933       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 12:10:53.633949       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 12:10:53.633953       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1028 12:10:53.633967       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 12:10:53.633970       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1028 12:10:53.634002       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 12:10:53.634038       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1028 12:10:53.634273       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 12:10:53.634300       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1028 12:10:54.554118       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 12:10:54.554484       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1028 12:10:54.602831       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 12:10:54.602850       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 12:10:54.648673       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 12:10:54.648775       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1028 12:10:54.673353       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 12:10:54.673369       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1028 12:10:57.230592       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
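
The forbidden list/watch errors above are confined to the first couple of seconds after apiserver startup, before the bootstrap RBAC roles for system:kube-scheduler were reconciled; once the informer caches sync at 12:10:57 the errors stop, so this is ordinary startup noise rather than a cause of the failure. The grants can be verified after bootstrap with kubectl's impersonation check, e.g.:

    # Sketch: confirm the scheduler's RBAC access; --as impersonates the
    # system:kube-scheduler user (assumes an admin kubeconfig is in use).
    kubectl auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler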
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-10-28 12:06:07 UTC, ends at Mon 2024-10-28 12:15:13 UTC. --
	Oct 28 12:10:56 running-upgrade-581000 kubelet[12560]: I1028 12:10:56.343276   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f8b6c7bc2ecdd502e5ac88815b7d667-kubeconfig\") pod \"kube-controller-manager-running-upgrade-581000\" (UID: \"2f8b6c7bc2ecdd502e5ac88815b7d667\") " pod="kube-system/kube-controller-manager-running-upgrade-581000"
	Oct 28 12:10:56 running-upgrade-581000 kubelet[12560]: I1028 12:10:56.344689   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/6c36ab1ebba3dabb5f0532e51b732902-etcd-certs\") pod \"etcd-running-upgrade-581000\" (UID: \"6c36ab1ebba3dabb5f0532e51b732902\") " pod="kube-system/etcd-running-upgrade-581000"
	Oct 28 12:10:56 running-upgrade-581000 kubelet[12560]: I1028 12:10:56.345227   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59cd7a30c033d5bc3ce3f5ffe33664ec-ca-certs\") pod \"kube-apiserver-running-upgrade-581000\" (UID: \"59cd7a30c033d5bc3ce3f5ffe33664ec\") " pod="kube-system/kube-apiserver-running-upgrade-581000"
	Oct 28 12:10:56 running-upgrade-581000 kubelet[12560]: I1028 12:10:56.345361   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f8b6c7bc2ecdd502e5ac88815b7d667-ca-certs\") pod \"kube-controller-manager-running-upgrade-581000\" (UID: \"2f8b6c7bc2ecdd502e5ac88815b7d667\") " pod="kube-system/kube-controller-manager-running-upgrade-581000"
	Oct 28 12:10:56 running-upgrade-581000 kubelet[12560]: I1028 12:10:56.345371   12560 reconciler.go:157] "Reconciler: start to sync state"
	Oct 28 12:10:57 running-upgrade-581000 kubelet[12560]: E1028 12:10:57.522107   12560 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-581000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-581000"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.441775   12560 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.474108   12560 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.490298   12560 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.504972   12560 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527030   12560 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527603   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk9vg\" (UniqueName: \"kubernetes.io/projected/37abcd8b-8c7c-4c52-be60-1ee3946b3a45-kube-api-access-wk9vg\") pod \"storage-provisioner\" (UID: \"37abcd8b-8c7c-4c52-be60-1ee3946b3a45\") " pod="kube-system/storage-provisioner"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527663   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8gnk\" (UniqueName: \"kubernetes.io/projected/714d34ec-ade9-4adc-82de-62bc41204c09-kube-api-access-h8gnk\") pod \"coredns-6d4b75cb6d-qmv2x\" (UID: \"714d34ec-ade9-4adc-82de-62bc41204c09\") " pod="kube-system/coredns-6d4b75cb6d-qmv2x"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527745   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnvpv\" (UniqueName: \"kubernetes.io/projected/c9cc306f-8dfd-416b-aa1f-3f71b4dcf7df-kube-api-access-rnvpv\") pod \"kube-proxy-lnn9c\" (UID: \"c9cc306f-8dfd-416b-aa1f-3f71b4dcf7df\") " pod="kube-system/kube-proxy-lnn9c"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527762   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/714d34ec-ade9-4adc-82de-62bc41204c09-config-volume\") pod \"coredns-6d4b75cb6d-qmv2x\" (UID: \"714d34ec-ade9-4adc-82de-62bc41204c09\") " pod="kube-system/coredns-6d4b75cb6d-qmv2x"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527773   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d39536cf-f0dc-46e2-911b-7a50da42c6e7-config-volume\") pod \"coredns-6d4b75cb6d-fskjs\" (UID: \"d39536cf-f0dc-46e2-911b-7a50da42c6e7\") " pod="kube-system/coredns-6d4b75cb6d-fskjs"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527783   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2lg4\" (UniqueName: \"kubernetes.io/projected/d39536cf-f0dc-46e2-911b-7a50da42c6e7-kube-api-access-n2lg4\") pod \"coredns-6d4b75cb6d-fskjs\" (UID: \"d39536cf-f0dc-46e2-911b-7a50da42c6e7\") " pod="kube-system/coredns-6d4b75cb6d-fskjs"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527792   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/37abcd8b-8c7c-4c52-be60-1ee3946b3a45-tmp\") pod \"storage-provisioner\" (UID: \"37abcd8b-8c7c-4c52-be60-1ee3946b3a45\") " pod="kube-system/storage-provisioner"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527801   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9cc306f-8dfd-416b-aa1f-3f71b4dcf7df-kube-proxy\") pod \"kube-proxy-lnn9c\" (UID: \"c9cc306f-8dfd-416b-aa1f-3f71b4dcf7df\") " pod="kube-system/kube-proxy-lnn9c"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527811   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9cc306f-8dfd-416b-aa1f-3f71b4dcf7df-xtables-lock\") pod \"kube-proxy-lnn9c\" (UID: \"c9cc306f-8dfd-416b-aa1f-3f71b4dcf7df\") " pod="kube-system/kube-proxy-lnn9c"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527820   12560 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9cc306f-8dfd-416b-aa1f-3f71b4dcf7df-lib-modules\") pod \"kube-proxy-lnn9c\" (UID: \"c9cc306f-8dfd-416b-aa1f-3f71b4dcf7df\") " pod="kube-system/kube-proxy-lnn9c"
	Oct 28 12:11:09 running-upgrade-581000 kubelet[12560]: I1028 12:11:09.527930   12560 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 28 12:11:10 running-upgrade-581000 kubelet[12560]: I1028 12:11:10.306083   12560 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="faf66659eb4ebf8f9c845e2b7630060a847571d5195462dffe9880c5f278069f"
	Oct 28 12:14:57 running-upgrade-581000 kubelet[12560]: I1028 12:14:57.905353   12560 scope.go:110] "RemoveContainer" containerID="f9e74904e5af869e2dad1207aa15fff74441308b514d110f8ad6f6353ddc464c"
	Oct 28 12:14:57 running-upgrade-581000 kubelet[12560]: I1028 12:14:57.924182   12560 scope.go:110] "RemoveContainer" containerID="47a579d7d2064c2aef1445e9654a459bc2cb7e212a3840f000f058305e26e8bf"
	
	
	==> storage-provisioner [a5e3ea6df78b] <==
	I1028 12:11:10.305904       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:11:10.317308       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:11:10.317328       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:11:10.321115       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:11:10.321286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-581000_68d68ea4-5372-4d94-a744-5b2393d6ca3e!
	I1028 12:11:10.322293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9114bb88-6e54-49ef-89c8-140043b72b0e", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-581000_68d68ea4-5372-4d94-a744-5b2393d6ca3e became leader
	I1028 12:11:10.421507       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-581000_68d68ea4-5372-4d94-a744-5b2393d6ca3e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-581000 -n running-upgrade-581000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-581000 -n running-upgrade-581000: exit status 2 (15.637380958s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-581000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-581000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-581000
--- FAIL: TestRunningBinaryUpgrade (586.47s)

TestKubernetesUpgrade (17.14s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-845000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-845000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.833099959s)

-- stdout --
	* [kubernetes-upgrade-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-845000" primary control-plane node in "kubernetes-upgrade-845000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-845000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:08:42.921608    9417 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:08:42.921767    9417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:08:42.921771    9417 out.go:358] Setting ErrFile to fd 2...
	I1028 05:08:42.921773    9417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:08:42.921902    9417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:08:42.923064    9417 out.go:352] Setting JSON to false
	I1028 05:08:42.940728    9417 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5893,"bootTime":1730111429,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:08:42.940800    9417 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:08:42.946952    9417 out.go:177] * [kubernetes-upgrade-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:08:42.954979    9417 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:08:42.955049    9417 notify.go:220] Checking for updates...
	I1028 05:08:42.959304    9417 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:08:42.961939    9417 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:08:42.964955    9417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:08:42.967975    9417 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:08:42.970933    9417 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:08:42.974251    9417 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:08:42.974321    9417 config.go:182] Loaded profile config "running-upgrade-581000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:08:42.974364    9417 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:08:42.978975    9417 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:08:42.985965    9417 start.go:297] selected driver: qemu2
	I1028 05:08:42.985972    9417 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:08:42.985979    9417 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:08:42.988385    9417 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:08:42.991970    9417 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:08:42.995055    9417 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 05:08:42.995079    9417 cni.go:84] Creating CNI manager for ""
	I1028 05:08:42.995101    9417 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 05:08:42.995132    9417 start.go:340] cluster config:
	{Name:kubernetes-upgrade-845000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:08:42.999448    9417 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:08:43.006881    9417 out.go:177] * Starting "kubernetes-upgrade-845000" primary control-plane node in "kubernetes-upgrade-845000" cluster
	I1028 05:08:43.010952    9417 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 05:08:43.010969    9417 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 05:08:43.010980    9417 cache.go:56] Caching tarball of preloaded images
	I1028 05:08:43.011056    9417 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:08:43.011067    9417 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 05:08:43.011123    9417 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/kubernetes-upgrade-845000/config.json ...
	I1028 05:08:43.011138    9417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/kubernetes-upgrade-845000/config.json: {Name:mke3e8cd000dc2ae3ba1a2c91cfae05f4ccee28f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:08:43.011442    9417 start.go:360] acquireMachinesLock for kubernetes-upgrade-845000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:08:43.011495    9417 start.go:364] duration metric: took 36.5µs to acquireMachinesLock for "kubernetes-upgrade-845000"
	I1028 05:08:43.011505    9417 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:08:43.011526    9417 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:08:43.018960    9417 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:08:43.045693    9417 start.go:159] libmachine.API.Create for "kubernetes-upgrade-845000" (driver="qemu2")
	I1028 05:08:43.045723    9417 client.go:168] LocalClient.Create starting
	I1028 05:08:43.045810    9417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:08:43.045854    9417 main.go:141] libmachine: Decoding PEM data...
	I1028 05:08:43.045865    9417 main.go:141] libmachine: Parsing certificate...
	I1028 05:08:43.045907    9417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:08:43.045936    9417 main.go:141] libmachine: Decoding PEM data...
	I1028 05:08:43.045943    9417 main.go:141] libmachine: Parsing certificate...
	I1028 05:08:43.046365    9417 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:08:43.214370    9417 main.go:141] libmachine: Creating SSH key...
	I1028 05:08:43.348168    9417 main.go:141] libmachine: Creating Disk image...
	I1028 05:08:43.348176    9417 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:08:43.348383    9417 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2
	I1028 05:08:43.360667    9417 main.go:141] libmachine: STDOUT: 
	I1028 05:08:43.360694    9417 main.go:141] libmachine: STDERR: 
	I1028 05:08:43.360759    9417 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2 +20000M
	I1028 05:08:43.369226    9417 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:08:43.369242    9417 main.go:141] libmachine: STDERR: 
	I1028 05:08:43.369259    9417 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2
	I1028 05:08:43.369268    9417 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:08:43.369279    9417 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:08:43.369311    9417 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:1e:82:a2:8f:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2
	I1028 05:08:43.371105    9417 main.go:141] libmachine: STDOUT: 
	I1028 05:08:43.371122    9417 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:08:43.371144    9417 client.go:171] duration metric: took 325.42175ms to LocalClient.Create
	I1028 05:08:45.373212    9417 start.go:128] duration metric: took 2.361717083s to createHost
	I1028 05:08:45.373256    9417 start.go:83] releasing machines lock for "kubernetes-upgrade-845000", held for 2.3618075s
	W1028 05:08:45.373301    9417 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:08:45.385365    9417 out.go:177] * Deleting "kubernetes-upgrade-845000" in qemu2 ...
	W1028 05:08:45.405826    9417 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:08:45.405844    9417 start.go:729] Will try again in 5 seconds ...
	I1028 05:08:50.407939    9417 start.go:360] acquireMachinesLock for kubernetes-upgrade-845000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:08:50.408221    9417 start.go:364] duration metric: took 242.917µs to acquireMachinesLock for "kubernetes-upgrade-845000"
	I1028 05:08:50.408253    9417 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:08:50.408396    9417 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:08:50.416802    9417 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:08:50.446653    9417 start.go:159] libmachine.API.Create for "kubernetes-upgrade-845000" (driver="qemu2")
	I1028 05:08:50.446702    9417 client.go:168] LocalClient.Create starting
	I1028 05:08:50.446814    9417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:08:50.446887    9417 main.go:141] libmachine: Decoding PEM data...
	I1028 05:08:50.446897    9417 main.go:141] libmachine: Parsing certificate...
	I1028 05:08:50.446952    9417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:08:50.446996    9417 main.go:141] libmachine: Decoding PEM data...
	I1028 05:08:50.447004    9417 main.go:141] libmachine: Parsing certificate...
	I1028 05:08:50.447561    9417 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:08:50.609314    9417 main.go:141] libmachine: Creating SSH key...
	I1028 05:08:50.655230    9417 main.go:141] libmachine: Creating Disk image...
	I1028 05:08:50.655236    9417 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:08:50.655442    9417 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2
	I1028 05:08:50.665426    9417 main.go:141] libmachine: STDOUT: 
	I1028 05:08:50.665447    9417 main.go:141] libmachine: STDERR: 
	I1028 05:08:50.665507    9417 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2 +20000M
	I1028 05:08:50.674265    9417 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:08:50.674282    9417 main.go:141] libmachine: STDERR: 
	I1028 05:08:50.674295    9417 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2
	I1028 05:08:50.674300    9417 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:08:50.674307    9417 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:08:50.674338    9417 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:db:ba:eb:c9:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2
	I1028 05:08:50.676137    9417 main.go:141] libmachine: STDOUT: 
	I1028 05:08:50.676153    9417 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:08:50.676172    9417 client.go:171] duration metric: took 229.470125ms to LocalClient.Create
	I1028 05:08:52.678339    9417 start.go:128] duration metric: took 2.269958958s to createHost
	I1028 05:08:52.678413    9417 start.go:83] releasing machines lock for "kubernetes-upgrade-845000", held for 2.270226208s
	W1028 05:08:52.678798    9417 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-845000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-845000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:08:52.692430    9417 out.go:201] 
	W1028 05:08:52.695759    9417 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:08:52.695806    9417 out.go:270] * 
	* 
	W1028 05:08:52.698666    9417 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:08:52.707500    9417 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-845000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-845000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-845000: (1.895312916s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-845000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-845000 status --format={{.Host}}: exit status 7 (69.989ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-845000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-845000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.188397125s)

-- stdout --
	* [kubernetes-upgrade-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-845000" primary control-plane node in "kubernetes-upgrade-845000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-845000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-845000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:08:54.724759    9444 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:08:54.724916    9444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:08:54.724922    9444 out.go:358] Setting ErrFile to fd 2...
	I1028 05:08:54.724924    9444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:08:54.725061    9444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:08:54.726141    9444 out.go:352] Setting JSON to false
	I1028 05:08:54.745016    9444 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5905,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:08:54.745099    9444 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:08:54.750157    9444 out.go:177] * [kubernetes-upgrade-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:08:54.759137    9444 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:08:54.759223    9444 notify.go:220] Checking for updates...
	I1028 05:08:54.766090    9444 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:08:54.770081    9444 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:08:54.773178    9444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:08:54.776129    9444 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:08:54.779093    9444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:08:54.782429    9444 config.go:182] Loaded profile config "kubernetes-upgrade-845000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1028 05:08:54.782694    9444 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:08:54.787094    9444 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:08:54.794132    9444 start.go:297] selected driver: qemu2
	I1028 05:08:54.794138    9444 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:08:54.794227    9444 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:08:54.796866    9444 cni.go:84] Creating CNI manager for ""
	I1028 05:08:54.796954    9444 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:08:54.796973    9444 start.go:340] cluster config:
	{Name:kubernetes-upgrade-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:08:54.801157    9444 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:08:54.809092    9444 out.go:177] * Starting "kubernetes-upgrade-845000" primary control-plane node in "kubernetes-upgrade-845000" cluster
	I1028 05:08:54.812949    9444 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:08:54.812972    9444 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:08:54.812978    9444 cache.go:56] Caching tarball of preloaded images
	I1028 05:08:54.813043    9444 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:08:54.813049    9444 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:08:54.813093    9444 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/kubernetes-upgrade-845000/config.json ...
	I1028 05:08:54.813452    9444 start.go:360] acquireMachinesLock for kubernetes-upgrade-845000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:08:54.813479    9444 start.go:364] duration metric: took 22.208µs to acquireMachinesLock for "kubernetes-upgrade-845000"
	I1028 05:08:54.813487    9444 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:08:54.813492    9444 fix.go:54] fixHost starting: 
	I1028 05:08:54.813601    9444 fix.go:112] recreateIfNeeded on kubernetes-upgrade-845000: state=Stopped err=<nil>
	W1028 05:08:54.813609    9444 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:08:54.817224    9444 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-845000" ...
	I1028 05:08:54.825121    9444 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:08:54.825165    9444 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:db:ba:eb:c9:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2
	I1028 05:08:54.827196    9444 main.go:141] libmachine: STDOUT: 
	I1028 05:08:54.827213    9444 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:08:54.827245    9444 fix.go:56] duration metric: took 13.752042ms for fixHost
	I1028 05:08:54.827250    9444 start.go:83] releasing machines lock for "kubernetes-upgrade-845000", held for 13.766959ms
	W1028 05:08:54.827256    9444 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:08:54.827287    9444 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:08:54.827290    9444 start.go:729] Will try again in 5 seconds ...
	I1028 05:08:59.829410    9444 start.go:360] acquireMachinesLock for kubernetes-upgrade-845000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:08:59.829698    9444 start.go:364] duration metric: took 227.917µs to acquireMachinesLock for "kubernetes-upgrade-845000"
	I1028 05:08:59.829776    9444 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:08:59.829789    9444 fix.go:54] fixHost starting: 
	I1028 05:08:59.830184    9444 fix.go:112] recreateIfNeeded on kubernetes-upgrade-845000: state=Stopped err=<nil>
	W1028 05:08:59.830202    9444 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:08:59.840546    9444 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-845000" ...
	I1028 05:08:59.844542    9444 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:08:59.844656    9444 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:db:ba:eb:c9:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubernetes-upgrade-845000/disk.qcow2
	I1028 05:08:59.849169    9444 main.go:141] libmachine: STDOUT: 
	I1028 05:08:59.849196    9444 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:08:59.849240    9444 fix.go:56] duration metric: took 19.452208ms for fixHost
	I1028 05:08:59.849251    9444 start.go:83] releasing machines lock for "kubernetes-upgrade-845000", held for 19.539167ms
	W1028 05:08:59.849334    9444 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-845000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-845000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:08:59.856468    9444 out.go:201] 
	W1028 05:08:59.859502    9444 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:08:59.859512    9444 out.go:270] * 
	* 
	W1028 05:08:59.860256    9444 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:08:59.870384    9444 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-845000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-845000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-845000 version --output=json: exit status 1 (30.865417ms)

** stderr ** 
	error: context "kubernetes-upgrade-845000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-28 05:08:59.910212 -0700 PDT m=+899.353496001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-845000 -n kubernetes-upgrade-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-845000 -n kubernetes-upgrade-845000: exit status 7 (34.448625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-845000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-845000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-845000
--- FAIL: TestKubernetesUpgrade (17.14s)
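
The failure pattern above repeats across every qemu2 test in this run: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), i.e. no socket_vmnet daemon is listening on the CI host, so no VM gets a network and every start exits with status 80. The minimal Go probe below is a hypothetical diagnostic (not part of the test suite) that reproduces the failing dial:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same Unix socket the qemu2 driver hands to
		// socket_vmnet_client. A "connection refused" error here is the
		// failure mode captured in the logs above; success means a
		// socket_vmnet daemon is listening at that path.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

If the probe fails, restarting the socket_vmnet service on the Jenkins agent is the likely fix; rerunning the tests alone will keep producing the same GUEST_PROVISION errors.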

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.03s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19875
- KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current725003336/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.03s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.98s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19875
- KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2119330997/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.98s)
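
Both hyperkit subtests fail for the reason minikube itself prints: DRV_UNSUPPORTED_OS, because the hyperkit driver only exists for Intel Macs and this agent is darwin/arm64. A hypothetical guard of the following shape (illustrative only; the real suite may gate these subtests differently) would skip rather than fail them on Apple-silicon runners:

	package hyperkit_test

	import (
		"runtime"
		"testing"
	)

	// skipIfNoHyperkit is an illustrative helper: hyperkit is Intel-only,
	// so subtests that exercise the hyperkit driver should be skipped on
	// darwin/arm64 instead of failing with exit status 56.
	func skipIfNoHyperkit(t *testing.T) {
		t.Helper()
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			t.Skip("hyperkit driver is not supported on darwin/arm64")
		}
	}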

TestStoppedBinaryUpgrade/Upgrade (572.7s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1023169040 start -p stopped-upgrade-451000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1023169040 start -p stopped-upgrade-451000 --memory=2200 --vm-driver=qemu2 : (40.451231625s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1023169040 -p stopped-upgrade-451000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1023169040 -p stopped-upgrade-451000 stop: (12.101420417s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-451000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-451000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.033703958s)

-- stdout --
	* [stopped-upgrade-451000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-451000" primary control-plane node in "stopped-upgrade-451000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-451000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1028 05:09:53.650599    9481 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:09:53.650806    9481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:09:53.650809    9481 out.go:358] Setting ErrFile to fd 2...
	I1028 05:09:53.650812    9481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:09:53.650939    9481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:09:53.652033    9481 out.go:352] Setting JSON to false
	I1028 05:09:53.670967    9481 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5964,"bootTime":1730111429,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:09:53.671041    9481 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:09:53.675662    9481 out.go:177] * [stopped-upgrade-451000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:09:53.683627    9481 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:09:53.683681    9481 notify.go:220] Checking for updates...
	I1028 05:09:53.689639    9481 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:09:53.692635    9481 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:09:53.695602    9481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:09:53.698688    9481 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:09:53.701641    9481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:09:53.704848    9481 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:09:53.707646    9481 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 05:09:53.710526    9481 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:09:53.714624    9481 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:09:53.721527    9481 start.go:297] selected driver: qemu2
	I1028 05:09:53.721532    9481 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-451000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 05:09:53.721579    9481 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:09:53.724347    9481 cni.go:84] Creating CNI manager for ""
	I1028 05:09:53.724380    9481 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:09:53.724411    9481 start.go:340] cluster config:
	{Name:stopped-upgrade-451000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 05:09:53.724469    9481 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:09:53.731564    9481 out.go:177] * Starting "stopped-upgrade-451000" primary control-plane node in "stopped-upgrade-451000" cluster
	I1028 05:09:53.735597    9481 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 05:09:53.735615    9481 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1028 05:09:53.735622    9481 cache.go:56] Caching tarball of preloaded images
	I1028 05:09:53.735680    9481 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:09:53.735689    9481 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1028 05:09:53.735743    9481 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/config.json ...
	I1028 05:09:53.736158    9481 start.go:360] acquireMachinesLock for stopped-upgrade-451000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:09:53.736190    9481 start.go:364] duration metric: took 25.334µs to acquireMachinesLock for "stopped-upgrade-451000"
	I1028 05:09:53.736198    9481 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:09:53.736203    9481 fix.go:54] fixHost starting: 
	I1028 05:09:53.736316    9481 fix.go:112] recreateIfNeeded on stopped-upgrade-451000: state=Stopped err=<nil>
	W1028 05:09:53.736325    9481 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:09:53.739545    9481 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-451000" ...
	I1028 05:09:53.747615    9481 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:09:53.747710    9481 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/qemu.pid -nic user,model=virtio,hostfwd=tcp::58218-:22,hostfwd=tcp::58219-:2376,hostname=stopped-upgrade-451000 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/disk.qcow2
	I1028 05:09:53.796528    9481 main.go:141] libmachine: STDOUT: 
	I1028 05:09:53.796568    9481 main.go:141] libmachine: STDERR: 
	I1028 05:09:53.796575    9481 main.go:141] libmachine: Waiting for VM to start (ssh -p 58218 docker@127.0.0.1)...
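The qemu2 driver runs the guest on user-mode networking, so its port 22 is reachable only through the hostfwd mapping in the command line above (host port 58218 in this run). "Waiting for VM to start" therefore amounts to polling that forwarded port until sshd accepts a connection. A hedged Go sketch of such a readiness poll (not minikube's actual code; the one-second retry interval and five-minute budget are assumptions):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials the forwarded SSH port until the guest accepts TCP
// connections or the deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // the guest's sshd is answering
		}
		time.Sleep(time.Second) // assumed retry interval
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	// 58218 comes from hostfwd=tcp::58218-:22 in the QEMU invocation above.
	if err := waitForSSH("127.0.0.1:58218", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}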
	I1028 05:10:13.699092    9481 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/config.json ...
	I1028 05:10:13.699989    9481 machine.go:93] provisionDockerMachine start ...
	I1028 05:10:13.700237    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:13.700787    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:13.700803    9481 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 05:10:13.787533    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 05:10:13.787573    9481 buildroot.go:166] provisioning hostname "stopped-upgrade-451000"
	I1028 05:10:13.787721    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:13.788012    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:13.788026    9481 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-451000 && echo "stopped-upgrade-451000" | sudo tee /etc/hostname
	I1028 05:10:13.865332    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-451000
	
	I1028 05:10:13.865448    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:13.865653    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:13.865667    9481 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-451000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-451000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-451000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 05:10:13.932635    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 05:10:13.932650    9481 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19875-6942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19875-6942/.minikube}
	I1028 05:10:13.932669    9481 buildroot.go:174] setting up certificates
	I1028 05:10:13.932676    9481 provision.go:84] configureAuth start
	I1028 05:10:13.932684    9481 provision.go:143] copyHostCerts
	I1028 05:10:13.932753    9481 exec_runner.go:144] found /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.pem, removing ...
	I1028 05:10:13.932759    9481 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.pem
	I1028 05:10:13.932854    9481 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.pem (1082 bytes)
	I1028 05:10:13.933040    9481 exec_runner.go:144] found /Users/jenkins/minikube-integration/19875-6942/.minikube/cert.pem, removing ...
	I1028 05:10:13.933044    9481 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19875-6942/.minikube/cert.pem
	I1028 05:10:13.933088    9481 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19875-6942/.minikube/cert.pem (1123 bytes)
	I1028 05:10:13.933208    9481 exec_runner.go:144] found /Users/jenkins/minikube-integration/19875-6942/.minikube/key.pem, removing ...
	I1028 05:10:13.933212    9481 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19875-6942/.minikube/key.pem
	I1028 05:10:13.933258    9481 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19875-6942/.minikube/key.pem (1675 bytes)
	I1028 05:10:13.933353    9481 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-451000 san=[127.0.0.1 localhost minikube stopped-upgrade-451000]
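The server certificate is issued from the profile's CA with exactly the SANs listed above, so TLS clients can verify the Docker endpoint whether they reach it as 127.0.0.1, localhost, minikube, or the profile hostname. A self-contained Go sketch of issuing such a certificate (illustrative only: minikube reuses ca.pem and ca-key.pem from its store, whereas this sketch generates a throwaway CA in memory, and the 24-hour validity is an assumption):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for certs/ca.pem and ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour), // assumed lifetime
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN list from the provision.go:117 line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-451000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-451000"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // key output omitted
}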
	I1028 05:10:14.001064    9481 provision.go:177] copyRemoteCerts
	I1028 05:10:14.001110    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 05:10:14.001118    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	I1028 05:10:14.033554    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 05:10:14.040295    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 05:10:14.047699    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 05:10:14.054715    9481 provision.go:87] duration metric: took 122.033084ms to configureAuth
	I1028 05:10:14.054724    9481 buildroot.go:189] setting minikube options for container-runtime
	I1028 05:10:14.054837    9481 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:10:14.054887    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:14.054979    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:14.054983    9481 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 05:10:14.110216    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1028 05:10:14.110224    9481 buildroot.go:70] root file system type: tmpfs
	I1028 05:10:14.110274    9481 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 05:10:14.110331    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:14.110443    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:14.110476    9481 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 05:10:14.172701    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 05:10:14.172771    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:14.172884    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:14.172892    9481 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 05:10:14.534475    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1028 05:10:14.534491    9481 machine.go:96] duration metric: took 834.508625ms to provisionDockerMachine
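The one-liner above makes the unit update idempotent: the rendered file lands in docker.service.new, and only when it differs from the live unit (or, as in this run, the live unit does not exist yet) is it moved into place and followed by daemon-reload, enable, and restart. A hedged Go sketch of the same update-if-changed pattern (not minikube source; the path handling and truncated unit body are placeholders):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit installs the rendered unit only when its content changed,
// mirroring the diff-then-mv shell one-liner in the log above.
func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated placeholder
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Println(err)
	}
}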
	I1028 05:10:14.534499    9481 start.go:293] postStartSetup for "stopped-upgrade-451000" (driver="qemu2")
	I1028 05:10:14.534506    9481 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 05:10:14.534585    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 05:10:14.534597    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	I1028 05:10:14.567578    9481 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 05:10:14.568834    9481 info.go:137] Remote host: Buildroot 2021.02.12
	I1028 05:10:14.568841    9481 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19875-6942/.minikube/addons for local assets ...
	I1028 05:10:14.568911    9481 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19875-6942/.minikube/files for local assets ...
	I1028 05:10:14.568997    9481 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem -> 74522.pem in /etc/ssl/certs
	I1028 05:10:14.569103    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 05:10:14.572155    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem --> /etc/ssl/certs/74522.pem (1708 bytes)
	I1028 05:10:14.579561    9481 start.go:296] duration metric: took 45.057917ms for postStartSetup
	I1028 05:10:14.579577    9481 fix.go:56] duration metric: took 20.843831084s for fixHost
	I1028 05:10:14.579626    9481 main.go:141] libmachine: Using SSH client type: native
	I1028 05:10:14.579728    9481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030165f0] 0x103018e30 <nil>  [] 0s} localhost 58218 <nil> <nil>}
	I1028 05:10:14.579732    9481 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 05:10:14.634636    9481 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117415.009479629
	
	I1028 05:10:14.634643    9481 fix.go:216] guest clock: 1730117415.009479629
	I1028 05:10:14.634647    9481 fix.go:229] Guest: 2024-10-28 05:10:15.009479629 -0700 PDT Remote: 2024-10-28 05:10:14.579579 -0700 PDT m=+20.952351793 (delta=429.900629ms)
	I1028 05:10:14.634658    9481 fix.go:200] guest clock delta is within tolerance: 429.900629ms
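The clock check runs `date +%s.%N` inside the guest and compares the result with the host clock; only a delta beyond tolerance would force a resync. A small Go sketch of the comparison (illustrative; the one-second tolerance is an assumption, and parsing via float64 loses some nanosecond precision):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	guestOut := "1730117415.009479629" // the `date +%s.%N` output captured above
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(time.Now())
	const tolerance = time.Second // assumed threshold
	within := delta < tolerance && delta > -tolerance
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
}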
	I1028 05:10:14.634660    9481 start.go:83] releasing machines lock for "stopped-upgrade-451000", held for 20.898923292s
	I1028 05:10:14.634740    9481 ssh_runner.go:195] Run: cat /version.json
	I1028 05:10:14.634749    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	I1028 05:10:14.634740    9481 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 05:10:14.634780    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	W1028 05:10:14.635305    9481 sshutil.go:64] dial failure (will retry): dial tcp [::1]:58218: connect: connection refused
	I1028 05:10:14.635322    9481 retry.go:31] will retry after 352.776313ms: dial tcp [::1]:58218: connect: connection refused
	W1028 05:10:14.665278    9481 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1028 05:10:14.665325    9481 ssh_runner.go:195] Run: systemctl --version
	I1028 05:10:14.667147    9481 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 05:10:14.668748    9481 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 05:10:14.668782    9481 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1028 05:10:14.671462    9481 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1028 05:10:14.676558    9481 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
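The two `find ... -exec sed` runs above repoint any pre-existing bridge or podman CNI config at minikube's pod CIDR by rewriting its "subnet" (and, for podman, "gateway") values to 10.244.0.0/16. A compact Go sketch of the same rewrite applied to an in-memory conflist (illustrative only; the sample conflist content is invented):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented stand-in for /etc/cni/net.d/87-podman-bridge.conflist.
	conflist := `{"cniVersion":"0.4.0","plugins":[{"type":"bridge","ipam":{"ranges":[[{"subnet":"10.88.0.0/16"}]]}}]}`

	// Rewrite whatever subnet the file declares to minikube's pod CIDR.
	re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
	fixed := re.ReplaceAllString(conflist, `"subnet": "10.244.0.0/16"`)
	fmt.Println(fixed)
}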
	I1028 05:10:14.676566    9481 start.go:495] detecting cgroup driver to use...
	I1028 05:10:14.676654    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 05:10:14.683490    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1028 05:10:14.686831    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 05:10:14.689684    9481 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 05:10:14.689714    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 05:10:14.692634    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 05:10:14.696242    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 05:10:14.699771    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 05:10:14.703265    9481 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 05:10:14.706534    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 05:10:14.709377    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 05:10:14.712419    9481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 05:10:14.715928    9481 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 05:10:14.719184    9481 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 05:10:14.721968    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:14.809766    9481 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 05:10:14.816882    9481 start.go:495] detecting cgroup driver to use...
	I1028 05:10:14.816994    9481 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 05:10:14.829412    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 05:10:14.835498    9481 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 05:10:14.846137    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 05:10:14.851152    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 05:10:14.855904    9481 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 05:10:14.894820    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 05:10:14.899563    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 05:10:14.905224    9481 ssh_runner.go:195] Run: which cri-dockerd
	I1028 05:10:14.906515    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 05:10:14.909129    9481 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1028 05:10:14.914202    9481 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 05:10:14.991736    9481 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 05:10:15.076674    9481 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 05:10:15.076743    9481 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 05:10:15.081803    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:15.164594    9481 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 05:10:16.301507    9481 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.136921084s)
	I1028 05:10:16.301601    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 05:10:16.306135    9481 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1028 05:10:16.312378    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 05:10:16.317617    9481 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 05:10:16.388294    9481 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 05:10:16.476327    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:16.555544    9481 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 05:10:16.561516    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 05:10:16.566026    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:16.628481    9481 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 05:10:16.665897    9481 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 05:10:16.665997    9481 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 05:10:16.667901    9481 start.go:563] Will wait 60s for crictl version
	I1028 05:10:16.667958    9481 ssh_runner.go:195] Run: which crictl
	I1028 05:10:16.669735    9481 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 05:10:16.685128    9481 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1028 05:10:16.685208    9481 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 05:10:16.702639    9481 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 05:10:16.721992    9481 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1028 05:10:16.722140    9481 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1028 05:10:16.723501    9481 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 05:10:16.727301    9481 kubeadm.go:883] updating cluster {Name:stopped-upgrade-451000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1028 05:10:16.727352    9481 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1028 05:10:16.727399    9481 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 05:10:16.737895    9481 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 05:10:16.737911    9481 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 05:10:16.737973    9481 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 05:10:16.740914    9481 ssh_runner.go:195] Run: which lz4
	I1028 05:10:16.742323    9481 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 05:10:16.743511    9481 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 05:10:16.743521    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1028 05:10:17.676318    9481 docker.go:653] duration metric: took 934.076292ms to copy over tarball
	I1028 05:10:17.676406    9481 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 05:10:18.865057    9481 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.188658417s)
	I1028 05:10:18.865072    9481 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 05:10:18.881379    9481 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 05:10:18.884582    9481 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1028 05:10:18.889859    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:18.952373    9481 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 05:10:20.504533    9481 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.552177041s)
	I1028 05:10:20.504654    9481 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 05:10:20.517489    9481 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 05:10:20.517506    9481 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1028 05:10:20.517513    9481 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 05:10:20.521548    9481 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:20.523394    9481 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:10:20.525810    9481 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:10:20.525929    9481 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:20.528031    9481 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:10:20.528137    9481 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:10:20.529951    9481 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 05:10:20.529969    9481 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:10:20.531254    9481 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:10:20.531471    9481 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:10:20.532601    9481 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:10:20.532951    9481 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 05:10:20.533921    9481 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 05:10:20.534065    9481 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:10:20.534835    9481 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:10:20.535774    9481 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 05:10:21.079312    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:10:21.090395    9481 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1028 05:10:21.090426    9481 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:10:21.090473    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1028 05:10:21.098467    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:10:21.103217    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1028 05:10:21.110872    9481 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1028 05:10:21.110893    9481 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:10:21.110960    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1028 05:10:21.121445    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1028 05:10:21.124612    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:10:21.136085    9481 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1028 05:10:21.136111    9481 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:10:21.136161    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1028 05:10:21.147711    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1028 05:10:21.209551    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1028 05:10:21.220036    9481 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1028 05:10:21.220060    9481 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1028 05:10:21.220119    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1028 05:10:21.225001    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:10:21.231165    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1028 05:10:21.239705    9481 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1028 05:10:21.239725    9481 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:10:21.239792    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1028 05:10:21.250015    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W1028 05:10:21.309077    9481 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1028 05:10:21.309224    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:10:21.321306    9481 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1028 05:10:21.321327    9481 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:10:21.321397    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 05:10:21.331607    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1028 05:10:21.332433    9481 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1028 05:10:21.334061    9481 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1028 05:10:21.334077    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1028 05:10:21.357344    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1028 05:10:21.376868    9481 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1028 05:10:21.376883    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1028 05:10:21.377564    9481 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1028 05:10:21.377582    9481 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1028 05:10:21.377652    9481 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W1028 05:10:21.417495    9481 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1028 05:10:21.417615    9481 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:21.429002    9481 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1028 05:10:21.429053    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1028 05:10:21.429183    9481 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1028 05:10:21.431087    9481 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1028 05:10:21.431103    9481 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:21.431151    9481 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:10:21.431589    9481 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1028 05:10:21.431603    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1028 05:10:21.446278    9481 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 05:10:21.446421    9481 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 05:10:21.448336    9481 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1028 05:10:21.448352    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1028 05:10:21.450253    9481 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1028 05:10:21.450262    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1028 05:10:21.498883    9481 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1028 05:10:21.498914    9481 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 05:10:21.498923    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1028 05:10:21.740257    9481 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 05:10:21.740296    9481 cache_images.go:92] duration metric: took 1.222795s to LoadCachedImages
	W1028 05:10:21.740339    9481 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
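Every "needs transfer" decision above follows one pattern: `docker image inspect --format {{.Id}}` reports the locally stored image ID, and when it does not match the expected hash the image is removed and reloaded from the on-disk cache via `docker load`. The step fails in this run because the cached kube-controller-manager tarball is missing on the host. A hedged Go sketch of that check-and-reload loop (not the actual cache_images.go; the example image, hash, and path are copied from this log):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// localImageID asks the Docker daemon for the stored ID of an image tag.
func localImageID(image string) (string, error) {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// ensureImage reloads the image from a cached tarball when the local ID
// does not match the expected hash ("needs transfer" in the log above).
func ensureImage(image, wantID, cachedTar string) error {
	id, err := localImageID(image)
	if err == nil && strings.TrimPrefix(id, "sha256:") == wantID {
		return nil // already present at the expected hash
	}
	exec.Command("docker", "rmi", image).Run() // best effort, as in the log
	f, err := os.Open(cachedTar)
	if err != nil {
		return fmt.Errorf("load %s from cache: %w", image, err) // the failure mode seen here
	}
	defer f.Close()
	load := exec.Command("docker", "load")
	load.Stdin = f
	var stderr bytes.Buffer
	load.Stderr = &stderr
	if err := load.Run(); err != nil {
		return fmt.Errorf("docker load: %v: %s", err, stderr.String())
	}
	return nil
}

func main() {
	err := ensureImage(
		"registry.k8s.io/pause:3.7",
		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
		"/var/lib/minikube/images/pause_3.7",
	)
	if err != nil {
		fmt.Println(err)
	}
}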
	I1028 05:10:21.740345    9481 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1028 05:10:21.740404    9481 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-451000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 05:10:21.740470    9481 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 05:10:21.758369    9481 cni.go:84] Creating CNI manager for ""
	I1028 05:10:21.758380    9481 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:10:21.758390    9481 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 05:10:21.758399    9481 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-451000 NodeName:stopped-upgrade-451000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 05:10:21.758477    9481 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-451000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 05:10:21.758544    9481 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1028 05:10:21.761677    9481 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 05:10:21.761717    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 05:10:21.764310    9481 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1028 05:10:21.769522    9481 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 05:10:21.774297    9481 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1028 05:10:21.780007    9481 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1028 05:10:21.781412    9481 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 05:10:21.784870    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:10:21.863701    9481 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 05:10:21.869345    9481 certs.go:68] Setting up /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000 for IP: 10.0.2.15
	I1028 05:10:21.869355    9481 certs.go:194] generating shared ca certs ...
	I1028 05:10:21.869364    9481 certs.go:226] acquiring lock for ca certs: {Name:mk596dd32716491232c9389abcfad3254ffdbfdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:10:21.869546    9481 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.key
	I1028 05:10:21.869587    9481 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/proxy-client-ca.key
	I1028 05:10:21.869594    9481 certs.go:256] generating profile certs ...
	I1028 05:10:21.869658    9481 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/client.key
	I1028 05:10:21.869681    9481 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key.1585e0f0
	I1028 05:10:21.869692    9481 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt.1585e0f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
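The SAN list covers every address clients may use to reach the apiserver: 10.96.0.1 (the first IP of the 10.96.0.0/12 service CIDR, i.e. the in-cluster `kubernetes` service), 127.0.0.1, 10.0.0.1, and the node IP 10.0.2.15. minikube signs this certificate with its own CA in Go; purely for illustration, a roughly equivalent (but self-signed, not CA-signed) certificate could be produced with OpenSSL 1.1.1+:

    # illustrative only; the real cert is signed by the minikube CA
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout apiserver.key -out apiserver.crt -subj "/CN=minikube" \
      -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:10.0.2.15"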
	I1028 05:10:21.969010    9481 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt.1585e0f0 ...
	I1028 05:10:21.969026    9481 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt.1585e0f0: {Name:mkf639cf273112e125f85c493bba6c636444a0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:10:21.969371    9481 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key.1585e0f0 ...
	I1028 05:10:21.969376    9481 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key.1585e0f0: {Name:mkcb4bec4b86434e343725edfa795749cf16a56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:10:21.970100    9481 certs.go:381] copying /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt.1585e0f0 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt
	I1028 05:10:21.970253    9481 certs.go:385] copying /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key.1585e0f0 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key
	I1028 05:10:21.970396    9481 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/proxy-client.key
	I1028 05:10:21.970540    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/7452.pem (1338 bytes)
	W1028 05:10:21.970565    9481 certs.go:480] ignoring /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/7452_empty.pem, impossibly tiny 0 bytes
	I1028 05:10:21.970571    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 05:10:21.970591    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem (1082 bytes)
	I1028 05:10:21.970611    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem (1123 bytes)
	I1028 05:10:21.970628    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/key.pem (1675 bytes)
	I1028 05:10:21.970666    9481 certs.go:484] found cert: /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem (1708 bytes)
	I1028 05:10:21.971035    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 05:10:21.978547    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 05:10:21.985774    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 05:10:21.992615    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 05:10:21.999605    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 05:10:22.006971    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 05:10:22.013987    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 05:10:22.020591    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 05:10:22.027967    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I1028 05:10:22.035219    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/ssl/certs/74522.pem --> /usr/share/ca-certificates/74522.pem (1708 bytes)
	I1028 05:10:22.041884    9481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 05:10:22.048599    9481 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 05:10:22.053741    9481 ssh_runner.go:195] Run: openssl version
	I1028 05:10:22.055608    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/74522.pem && ln -fs /usr/share/ca-certificates/74522.pem /etc/ssl/certs/74522.pem"
	I1028 05:10:22.059328    9481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/74522.pem
	I1028 05:10:22.060891    9481 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:54 /usr/share/ca-certificates/74522.pem
	I1028 05:10:22.060920    9481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/74522.pem
	I1028 05:10:22.062895    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/74522.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 05:10:22.065803    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 05:10:22.068706    9481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 05:10:22.070205    9481 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 12:06 /usr/share/ca-certificates/minikubeCA.pem
	I1028 05:10:22.070231    9481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 05:10:22.071864    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 05:10:22.074986    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I1028 05:10:22.077776    9481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7452.pem
	I1028 05:10:22.079116    9481 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:54 /usr/share/ca-certificates/7452.pem
	I1028 05:10:22.079143    9481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I1028 05:10:22.080855    9481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
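The three `ln -fs` commands above follow OpenSSL's subject-hash lookup convention: `openssl x509 -hash -noout` prints the 8-hex-digit hash of the certificate's subject (3ec20f2e for 74522.pem, b5213941 for minikubeCA.pem, 51391683 for 7452.pem), and OpenSSL resolves trust at runtime by looking for `<hash>.0` under /etc/ssl/certs. A minimal reproduction of the pattern:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"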
	I1028 05:10:22.084186    9481 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 05:10:22.085691    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 05:10:22.087695    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 05:10:22.089799    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 05:10:22.091615    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 05:10:22.093319    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 05:10:22.095027    9481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
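Each `-checkend 86400` probe exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a non-zero status flags a cert that is expired or about to expire. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expired or expiring soon"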
	I1028 05:10:22.096812    9481 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-451000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:58252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-451000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 05:10:22.096883    9481 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 05:10:22.106533    9481 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 05:10:22.109615    9481 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 05:10:22.109625    9481 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 05:10:22.109660    9481 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 05:10:22.112390    9481 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 05:10:22.112691    9481 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-451000" does not appear in /Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:10:22.112791    9481 kubeconfig.go:62] /Users/jenkins/minikube-integration/19875-6942/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-451000" cluster setting kubeconfig missing "stopped-upgrade-451000" context setting]
	I1028 05:10:22.112993    9481 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/kubeconfig: {Name:mk90a124f6c448e81120cf90ba82d6374e9cd851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
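Because "stopped-upgrade-451000" was missing from the kubeconfig, minikube writes both the cluster and the context entries back under a file lock. A roughly equivalent manual repair (paths illustrative) would be:

    kubectl config set-cluster stopped-upgrade-451000 \
      --server=https://10.0.2.15:8443 \
      --certificate-authority=$HOME/.minikube/ca.crt
    kubectl config set-context stopped-upgrade-451000 \
      --cluster=stopped-upgrade-451000 --user=stopped-upgrade-451000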
	I1028 05:10:22.113453    9481 kapi.go:59] client config for stopped-upgrade-451000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/client.key", CAFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104a72680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 05:10:22.113827    9481 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 05:10:22.116456    9481 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-451000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
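The drift check is simply `diff -u` exit status: any non-empty diff marks the on-disk kubeadm.yaml stale and triggers reconfiguration from the .new file. Here the drift is the criSocket gaining its unix:// scheme, cgroupDriver switching from systemd to cgroupfs, and two new kubelet fields (hairpinMode, runtimeRequestTimeout). The same test can be done by hand, since diff exits non-zero when the files differ:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      >/dev/null || echo "config drift: reconfigure"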
	I1028 05:10:22.116461    9481 kubeadm.go:1160] stopping kube-system containers ...
	I1028 05:10:22.116510    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 05:10:22.128026    9481 docker.go:483] Stopping containers: [9954ffaa9f68 d14d16734881 84467d88e691 fc096b12f559 1798e6b77be3 47e6cfc87e4e be4344508268 f02184c9956d]
	I1028 05:10:22.128096    9481 ssh_runner.go:195] Run: docker stop 9954ffaa9f68 d14d16734881 84467d88e691 fc096b12f559 1798e6b77be3 47e6cfc87e4e be4344508268 f02184c9956d
	I1028 05:10:22.138884    9481 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 05:10:22.144574    9481 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 05:10:22.147936    9481 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 05:10:22.147942    9481 kubeadm.go:157] found existing configuration files:
	
	I1028 05:10:22.147976    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/admin.conf
	I1028 05:10:22.151122    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 05:10:22.151150    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 05:10:22.153669    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/kubelet.conf
	I1028 05:10:22.156223    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 05:10:22.156246    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 05:10:22.159184    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/controller-manager.conf
	I1028 05:10:22.161831    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 05:10:22.161854    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 05:10:22.164558    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/scheduler.conf
	I1028 05:10:22.167521    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 05:10:22.167549    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 05:10:22.170341    9481 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 05:10:22.172944    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:10:22.195553    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:10:22.571628    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:10:22.702429    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 05:10:22.736020    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
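Rather than a full `kubeadm init`, the restart runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config, regenerating only what the existing cluster needs. After `control-plane all` and `etcd local` succeed, the static pod manifests should be back under the configured staticPodPath:

    ls /etc/kubernetes/manifests
    # expect: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml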
	I1028 05:10:22.768284    9481 api_server.go:52] waiting for apiserver process to appear ...
	I1028 05:10:22.768388    9481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:10:23.270417    9481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:10:23.770423    9481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:10:23.774649    9481 api_server.go:72] duration metric: took 1.006386125s to wait for apiserver process to appear ...
	I1028 05:10:23.774660    9481 api_server.go:88] waiting for apiserver healthz status ...
	I1028 05:10:23.774676    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:28.776629    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:28.776675    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:33.776857    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:33.776897    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:38.777224    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:38.777272    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:43.777922    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:43.778013    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:48.779077    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:48.779102    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:53.779919    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:53.779962    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:10:58.781129    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:10:58.781164    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:03.782619    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:03.782649    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:08.784533    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:08.784567    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:13.785876    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:13.785909    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:18.787365    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:18.787466    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:23.789892    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
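Each healthz probe above times out after roughly five seconds and is retried against an overall deadline; once log gathering starts below, the apiserver has failed every probe in the window. Against a healthy cluster the endpoint returns the literal string `ok`, which can be checked by hand (assuming the apiserver permits anonymous access to /healthz; -k skips TLS verification):

    curl -k https://10.0.2.15:8443/healthz
    # expect: ok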
	I1028 05:11:23.790072    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:23.806022    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:11:23.806124    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:23.819350    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:11:23.819425    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:23.830383    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:11:23.830475    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:23.841018    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:11:23.841089    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:23.851285    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:11:23.851358    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:23.868649    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:11:23.868732    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:23.879005    9481 logs.go:282] 0 containers: []
	W1028 05:11:23.879015    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:23.879080    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:23.899768    9481 logs.go:282] 0 containers: []
	W1028 05:11:23.899782    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:11:23.899790    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:11:23.899795    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:11:23.922245    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:23.922256    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:23.926311    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:11:23.926320    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:11:23.952570    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:11:23.952584    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:11:23.965559    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:11:23.965571    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:11:23.977659    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:11:23.977670    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:11:23.995670    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:11:23.995683    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:11:24.017965    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:24.017976    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:24.057234    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:11:24.057241    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:11:24.071301    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:11:24.071311    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:11:24.088259    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:11:24.088272    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:11:24.104411    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:24.104425    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:24.215633    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:11:24.215647    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:11:24.229577    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:11:24.229590    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:11:24.248900    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:24.248910    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:26.775436    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:31.775725    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:31.776109    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:31.808874    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:11:31.809033    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:31.832241    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:11:31.832349    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:31.845787    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:11:31.845876    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:31.857963    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:11:31.858057    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:31.869033    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:11:31.869115    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:31.885753    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:11:31.885835    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:31.896440    9481 logs.go:282] 0 containers: []
	W1028 05:11:31.896452    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:31.896513    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:31.907151    9481 logs.go:282] 0 containers: []
	W1028 05:11:31.907164    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:11:31.907185    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:11:31.907192    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:11:31.932037    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:11:31.932048    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:11:31.947699    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:11:31.947709    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:11:31.959761    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:11:31.959771    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:11:31.978648    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:11:31.978658    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:11:31.996282    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:31.996291    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:32.020579    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:32.020586    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:32.024756    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:32.024762    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:32.060015    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:11:32.060026    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:11:32.071834    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:11:32.071844    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:11:32.086017    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:11:32.086026    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:11:32.099981    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:11:32.099994    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:11:32.111937    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:11:32.111948    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:11:32.129027    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:32.129037    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:32.166419    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:11:32.166427    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:11:34.680411    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:39.682627    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:39.682822    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:39.698198    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:11:39.698292    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:39.711124    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:11:39.711200    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:39.721894    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:11:39.721974    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:39.732345    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:11:39.732420    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:39.742583    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:11:39.742651    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:39.753498    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:11:39.753578    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:39.763842    9481 logs.go:282] 0 containers: []
	W1028 05:11:39.763857    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:39.763919    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:39.774249    9481 logs.go:282] 0 containers: []
	W1028 05:11:39.774260    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:11:39.774267    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:11:39.774272    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:11:39.787101    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:11:39.787116    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:11:39.800013    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:39.800028    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:39.838091    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:11:39.838100    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:11:39.867839    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:11:39.867850    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:11:39.882150    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:11:39.882158    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:11:39.904279    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:11:39.904294    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:11:39.916439    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:11:39.916450    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:11:39.937603    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:39.937614    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:39.941640    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:11:39.941648    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:11:39.955663    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:11:39.955672    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:11:39.967101    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:11:39.967111    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:11:39.978571    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:39.978579    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:40.004132    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:40.004140    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:40.041103    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:11:40.041114    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:11:42.560523    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:47.562726    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:47.562829    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:47.573784    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:11:47.573865    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:47.584895    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:11:47.584968    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:47.595552    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:11:47.595639    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:47.606909    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:11:47.606993    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:47.617916    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:11:47.617988    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:47.629082    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:11:47.629161    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:47.639522    9481 logs.go:282] 0 containers: []
	W1028 05:11:47.639533    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:47.639593    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:47.650341    9481 logs.go:282] 0 containers: []
	W1028 05:11:47.650353    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:11:47.650362    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:11:47.650368    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:11:47.665651    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:11:47.665664    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:11:47.681876    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:11:47.681889    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:11:47.706755    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:11:47.706766    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:11:47.724099    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:11:47.724109    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:11:47.737340    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:47.737350    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:47.761416    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:11:47.761426    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:11:47.773231    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:47.773247    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:47.812113    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:47.812122    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:47.816206    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:47.816212    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:47.850125    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:11:47.850137    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:11:47.864002    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:11:47.864015    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:11:47.875777    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:11:47.875788    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:11:47.887287    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:11:47.887300    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:11:47.899127    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:11:47.899137    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:11:50.418936    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:11:55.421316    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:11:55.421753    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:11:55.452796    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:11:55.452940    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:11:55.471456    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:11:55.471564    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:11:55.485346    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:11:55.485435    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:11:55.496941    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:11:55.497030    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:11:55.508054    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:11:55.508133    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:11:55.518733    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:11:55.518813    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:11:55.528272    9481 logs.go:282] 0 containers: []
	W1028 05:11:55.528283    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:11:55.528344    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:11:55.538276    9481 logs.go:282] 0 containers: []
	W1028 05:11:55.538286    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:11:55.538294    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:11:55.538299    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:11:55.552995    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:11:55.553009    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:11:55.564378    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:11:55.564392    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:11:55.583400    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:11:55.583414    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:11:55.603889    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:11:55.603898    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:11:55.617756    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:11:55.617765    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:11:55.622687    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:11:55.622696    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:11:55.636758    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:11:55.636768    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:11:55.652127    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:11:55.652148    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:11:55.664636    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:11:55.664646    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:11:55.687956    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:11:55.687962    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:11:55.730528    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:11:55.730537    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:11:55.766440    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:11:55.766449    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:11:55.791610    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:11:55.791622    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:11:55.810549    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:11:55.810561    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:11:58.324697    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:03.326001    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:03.326241    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:03.348679    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:03.348818    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:03.365003    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:03.365098    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:03.386854    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:03.386936    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:03.400146    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:03.400229    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:03.410434    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:03.410504    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:03.421601    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:03.421674    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:03.431262    9481 logs.go:282] 0 containers: []
	W1028 05:12:03.431279    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:03.431350    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:03.441565    9481 logs.go:282] 0 containers: []
	W1028 05:12:03.441577    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:03.441584    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:03.441590    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:03.446267    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:03.446276    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:03.460580    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:03.460593    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:03.471967    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:03.471978    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:03.483776    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:03.483791    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:03.521947    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:03.521955    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:03.542241    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:03.542256    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:03.558808    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:03.558817    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:03.584191    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:03.584202    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:03.600971    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:03.600981    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:03.612812    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:03.612827    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:03.630436    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:03.630447    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:03.646932    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:03.646943    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:03.659000    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:03.659011    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:03.684568    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:03.684578    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:06.226813    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:11.229082    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:11.229342    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:11.246426    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:11.246527    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:11.259472    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:11.259548    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:11.270540    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:11.270624    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:11.281585    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:11.281660    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:11.292623    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:11.292696    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:11.303836    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:11.303915    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:11.313996    9481 logs.go:282] 0 containers: []
	W1028 05:12:11.314006    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:11.314064    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:11.324343    9481 logs.go:282] 0 containers: []
	W1028 05:12:11.324353    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:11.324360    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:11.324366    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:11.358513    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:11.358525    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:11.380576    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:11.380586    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:11.398318    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:11.398328    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:11.410220    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:11.410231    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:11.439785    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:11.439802    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:11.454969    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:11.454981    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:11.473393    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:11.473405    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:11.487333    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:11.487343    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:11.512106    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:11.512113    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:11.550557    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:11.550566    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:11.557032    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:11.557040    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:11.577893    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:11.577908    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:11.596843    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:11.596852    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:11.611509    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:11.611518    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:14.125671    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:19.127844    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:19.128106    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:19.153267    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:19.153369    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:19.168112    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:19.168201    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:19.180075    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:19.180159    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:19.191093    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:19.191173    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:19.201876    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:19.201962    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:19.212722    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:19.212807    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:19.223576    9481 logs.go:282] 0 containers: []
	W1028 05:12:19.223585    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:19.223645    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:19.234451    9481 logs.go:282] 0 containers: []
	W1028 05:12:19.234462    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:19.234470    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:19.234475    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:19.250190    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:19.250198    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:19.289047    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:19.289056    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:19.303409    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:19.303423    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:19.315424    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:19.315438    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:19.319434    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:19.319443    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:19.343961    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:19.343971    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:19.367172    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:19.367178    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:19.384094    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:19.384105    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:19.402655    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:19.402706    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:19.421939    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:19.421951    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:19.433822    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:19.433833    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:19.447651    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:19.447662    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:19.461407    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:19.461419    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:19.472991    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:19.473003    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:22.011320    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:27.013575    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:27.013952    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:27.039152    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:27.039288    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:27.056033    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:27.056127    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:27.069490    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:27.069576    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:27.081648    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:27.081734    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:27.091993    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:27.092072    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:27.102439    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:27.102516    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:27.113330    9481 logs.go:282] 0 containers: []
	W1028 05:12:27.113341    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:27.113406    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:27.125596    9481 logs.go:282] 0 containers: []
	W1028 05:12:27.125606    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:27.125615    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:27.125621    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:27.141163    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:27.141176    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:27.155754    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:27.155765    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:27.170615    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:27.170625    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:27.182524    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:27.182537    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:27.195909    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:27.195923    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:27.220981    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:27.220992    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:27.238902    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:27.238914    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:27.274850    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:27.274865    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:27.300538    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:27.300548    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:27.315296    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:27.315305    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:27.351674    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:27.351685    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:27.355520    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:27.355527    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:27.368788    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:27.368797    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:27.384076    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:27.384085    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:29.898248    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:34.900785    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:34.900933    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:34.915009    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:34.915094    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:34.925869    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:34.925975    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:34.936418    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:34.936502    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:34.946978    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:34.947056    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:34.957462    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:34.957540    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:34.968082    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:34.968158    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:34.978381    9481 logs.go:282] 0 containers: []
	W1028 05:12:34.978397    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:34.978468    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:34.988700    9481 logs.go:282] 0 containers: []
	W1028 05:12:34.988715    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:34.988723    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:34.988729    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:35.000949    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:35.000962    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:35.038641    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:35.038651    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:35.042878    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:35.042883    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:35.060213    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:35.060223    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:35.071994    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:35.072006    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:35.109768    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:35.109778    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:35.127338    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:35.127349    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:35.140923    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:35.140934    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:35.155855    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:35.155867    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:35.178065    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:35.178075    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:35.201563    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:35.201574    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:35.215847    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:35.215857    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:35.240383    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:35.240393    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:35.254182    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:35.254192    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:37.767735    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:42.768142    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:42.768399    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:42.786311    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:42.786413    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:42.803794    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:42.803875    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:42.815400    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:42.815477    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:42.826052    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:42.826128    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:42.836848    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:42.836921    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:42.847796    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:42.847875    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:42.857845    9481 logs.go:282] 0 containers: []
	W1028 05:12:42.857856    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:42.857909    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:42.868416    9481 logs.go:282] 0 containers: []
	W1028 05:12:42.868430    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:42.868437    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:42.868443    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:42.872687    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:42.872693    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:42.884638    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:42.884648    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:42.901956    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:42.901968    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:42.915189    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:42.915199    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:42.950469    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:42.950484    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:42.965593    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:42.965606    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:42.981257    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:42.981268    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:43.006634    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:43.006643    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:43.045953    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:43.045962    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:43.070629    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:43.070639    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:43.084530    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:43.084541    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:43.096184    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:43.096197    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:43.110772    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:43.110784    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:43.127629    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:43.127639    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:45.643351    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:50.645705    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:50.645963    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:50.666288    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:50.666390    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:50.680748    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:50.680834    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:50.693126    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:50.693205    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:50.705170    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:50.705244    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:50.715767    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:50.715846    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:50.726165    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:50.726240    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:50.740651    9481 logs.go:282] 0 containers: []
	W1028 05:12:50.740660    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:50.740720    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:50.751555    9481 logs.go:282] 0 containers: []
	W1028 05:12:50.751565    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:50.751574    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:50.751580    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:50.788353    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:50.788365    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:50.802205    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:50.802215    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:50.826774    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:50.826788    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:50.838215    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:50.838226    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:50.852266    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:50.852278    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:50.864154    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:50.864166    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:50.886401    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:50.886411    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:50.898418    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:50.898429    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:50.932646    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:50.932658    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:50.949119    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:50.949133    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:50.961089    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:50.961101    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:50.977627    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:50.977639    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:12:51.001430    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:51.001437    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:51.005976    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:51.005982    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:53.522787    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:12:58.524998    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:12:58.525250    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:12:58.547492    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:12:58.547599    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:12:58.562492    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:12:58.562580    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:12:58.574801    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:12:58.574873    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:12:58.585940    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:12:58.586020    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:12:58.596368    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:12:58.596448    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:12:58.610827    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:12:58.610894    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:12:58.624724    9481 logs.go:282] 0 containers: []
	W1028 05:12:58.624738    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:12:58.624810    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:12:58.635487    9481 logs.go:282] 0 containers: []
	W1028 05:12:58.635502    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:12:58.635510    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:12:58.635516    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:12:58.647284    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:12:58.647295    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:12:58.658741    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:12:58.658752    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:12:58.675956    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:12:58.675966    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:12:58.714656    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:12:58.714664    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:12:58.718533    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:12:58.718541    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:12:58.758207    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:12:58.758221    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:12:58.772208    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:12:58.772217    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:12:58.785409    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:12:58.785420    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:12:58.799565    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:12:58.799576    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:12:58.813547    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:12:58.813557    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:12:58.824724    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:12:58.824734    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:12:58.836683    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:12:58.836694    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:12:58.865270    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:12:58.865280    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:12:58.880192    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:12:58.880203    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:01.408846    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:06.411074    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:06.411201    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:06.422360    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:06.422457    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:06.436561    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:06.436642    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:06.447104    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:06.447181    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:06.457518    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:06.457587    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:06.468268    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:06.468334    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:06.478665    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:06.478738    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:06.490156    9481 logs.go:282] 0 containers: []
	W1028 05:13:06.490167    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:06.490232    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:06.504210    9481 logs.go:282] 0 containers: []
	W1028 05:13:06.504224    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:06.504234    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:06.504241    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:06.518844    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:06.518854    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:06.531880    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:06.531891    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:06.546865    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:06.546875    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:06.558637    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:06.558647    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:06.572134    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:06.572146    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:06.586728    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:06.586739    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:06.601407    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:06.601419    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:06.612839    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:06.612849    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:06.630168    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:06.630177    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:06.641667    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:06.641677    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:06.680646    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:06.680653    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:06.704077    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:06.704084    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:06.739577    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:06.739589    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:06.763673    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:06.763683    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:09.269804    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:14.272014    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:14.272193    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:14.287757    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:14.287841    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:14.298369    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:14.298452    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:14.309076    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:14.309146    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:14.327264    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:14.327345    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:14.337538    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:14.337602    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:14.347644    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:14.347717    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:14.357954    9481 logs.go:282] 0 containers: []
	W1028 05:13:14.357968    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:14.358034    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:14.368701    9481 logs.go:282] 0 containers: []
	W1028 05:13:14.368711    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:14.368720    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:14.368725    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:14.379927    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:14.379940    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:14.391672    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:14.391684    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:14.403762    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:14.403772    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:14.426403    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:14.426410    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:14.443534    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:14.443546    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:14.468778    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:14.468788    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:14.484994    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:14.485004    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:14.499278    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:14.499288    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:14.512223    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:14.512232    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:14.548313    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:14.548324    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:14.565524    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:14.565534    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:14.602801    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:14.602809    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:14.606880    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:14.606889    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:14.622770    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:14.622781    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:17.136541    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:22.138660    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:22.138888    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:22.174728    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:22.174821    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:22.200392    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:22.200474    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:22.211567    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:22.211642    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:22.225978    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:22.226060    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:22.237023    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:22.237096    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:22.247718    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:22.247796    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:22.258168    9481 logs.go:282] 0 containers: []
	W1028 05:13:22.258182    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:22.258253    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:22.268151    9481 logs.go:282] 0 containers: []
	W1028 05:13:22.268163    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:22.268172    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:22.268178    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:22.292338    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:22.292351    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:22.311799    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:22.311812    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:22.327534    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:22.327547    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:22.339626    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:22.339640    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:22.377694    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:22.377703    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:22.393849    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:22.393862    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:22.429521    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:22.429534    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:22.444396    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:22.444409    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:22.462772    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:22.462782    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:22.476660    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:22.476675    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:22.481383    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:22.481389    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:22.496602    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:22.496612    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:22.520849    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:22.520856    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:22.532554    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:22.532564    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:25.049564    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:30.050409    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:30.051078    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:30.090603    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:30.090756    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:30.112165    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:30.112267    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:30.129075    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:30.129166    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:30.141681    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:30.141762    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:30.152710    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:30.152788    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:30.163612    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:30.163694    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:30.173800    9481 logs.go:282] 0 containers: []
	W1028 05:13:30.173819    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:30.173891    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:30.185446    9481 logs.go:282] 0 containers: []
	W1028 05:13:30.185457    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:30.185467    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:30.185472    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:30.203323    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:30.203335    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:30.214988    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:30.214999    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:30.240868    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:30.240878    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:30.257979    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:30.257993    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:30.276166    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:30.276177    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:30.290306    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:30.290316    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:30.305440    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:30.305452    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:30.322913    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:30.322923    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:30.327183    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:30.327189    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:30.362439    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:30.362455    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:30.380122    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:30.380133    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:30.393954    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:30.393964    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:30.418047    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:30.418061    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:30.430439    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:30.430452    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:32.972467    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:37.975190    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:37.975547    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:38.003922    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:38.004070    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:38.022445    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:38.022550    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:38.036007    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:38.036095    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:38.047613    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:38.047686    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:38.058208    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:38.058286    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:38.068862    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:38.068930    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:38.079663    9481 logs.go:282] 0 containers: []
	W1028 05:13:38.079674    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:38.079743    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:38.089664    9481 logs.go:282] 0 containers: []
	W1028 05:13:38.089674    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:38.089683    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:38.089689    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:38.093876    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:38.093884    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:38.110955    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:38.110965    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:38.126853    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:38.126863    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:38.149490    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:38.149501    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:38.161456    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:38.161468    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:38.172993    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:38.173003    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:38.207925    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:38.207935    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:38.233682    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:38.233692    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:38.267882    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:38.267892    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:38.307861    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:38.307883    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:38.322996    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:38.323006    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:38.352856    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:38.352869    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:38.365494    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:38.365508    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:38.377568    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:38.377580    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:40.898574    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:45.901249    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:45.901708    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:45.931810    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:45.931960    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:45.956318    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:45.956413    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:45.973923    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:45.973998    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:45.984176    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:45.984262    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:45.994568    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:45.994646    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:46.005235    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:46.005303    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:46.015756    9481 logs.go:282] 0 containers: []
	W1028 05:13:46.015768    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:46.015836    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:46.026296    9481 logs.go:282] 0 containers: []
	W1028 05:13:46.026307    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:46.026316    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:46.026321    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:46.050955    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:46.050964    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:46.064931    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:46.064941    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:46.102234    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:46.102245    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:46.106677    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:46.106686    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:46.124249    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:46.124262    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:46.136384    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:46.136394    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:46.149825    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:46.149834    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:46.165819    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:46.165832    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:46.183808    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:46.183817    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:46.198391    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:46.198402    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:46.236741    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:46.236752    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:46.252745    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:46.252760    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:46.264862    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:46.264873    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:46.288885    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:46.288902    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:48.826941    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:13:53.829230    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:13:53.829533    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:13:53.855184    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:13:53.855314    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:13:53.872704    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:13:53.872804    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:13:53.886656    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:13:53.886730    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:13:53.901157    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:13:53.901230    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:13:53.912484    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:13:53.912567    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:13:53.923042    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:13:53.923122    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:13:53.932606    9481 logs.go:282] 0 containers: []
	W1028 05:13:53.932623    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:13:53.932681    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:13:53.943315    9481 logs.go:282] 0 containers: []
	W1028 05:13:53.943326    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:13:53.943334    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:13:53.943339    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:13:53.954706    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:13:53.954716    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:13:53.966548    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:13:53.966559    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:13:53.979519    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:13:53.979528    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:13:53.983991    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:13:53.984001    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:13:54.010149    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:13:54.010159    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:13:54.029062    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:13:54.029074    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:13:54.053824    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:13:54.053842    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:13:54.092818    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:13:54.092832    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:13:54.112716    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:13:54.112730    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:13:54.126554    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:13:54.126566    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:13:54.144002    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:13:54.144013    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:13:54.162741    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:13:54.162755    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:13:54.175637    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:13:54.175650    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:13:54.216984    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:13:54.217002    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:13:56.734684    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:01.737269    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:01.737612    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:01.764035    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:14:01.764172    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:01.781472    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:14:01.781570    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:01.794874    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:14:01.794952    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:01.806194    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:14:01.806277    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:01.817204    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:14:01.817294    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:01.827893    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:14:01.827971    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:01.838675    9481 logs.go:282] 0 containers: []
	W1028 05:14:01.838686    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:01.838752    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:01.848936    9481 logs.go:282] 0 containers: []
	W1028 05:14:01.848947    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:14:01.848956    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:01.848962    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:01.890213    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:14:01.890225    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:14:01.916866    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:14:01.916883    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:14:01.932191    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:14:01.932201    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:14:01.949705    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:01.949717    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:01.973757    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:01.973777    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:02.013239    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:14:02.013251    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:14:02.031920    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:14:02.031932    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:14:02.046833    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:14:02.046850    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:14:02.059901    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:14:02.059915    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:14:02.073066    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:02.073078    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:02.077534    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:14:02.077546    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:14:02.090120    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:14:02.090131    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:14:02.106369    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:14:02.106384    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:14:02.127217    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:14:02.127231    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:04.641697    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:09.643885    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:09.644045    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:09.656203    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:14:09.656286    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:09.667273    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:14:09.667358    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:09.681968    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:14:09.682044    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:09.692580    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:14:09.692658    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:09.703147    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:14:09.703217    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:09.714062    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:14:09.714128    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:09.724210    9481 logs.go:282] 0 containers: []
	W1028 05:14:09.724223    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:09.724291    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:09.735547    9481 logs.go:282] 0 containers: []
	W1028 05:14:09.735561    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:14:09.735570    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:09.735576    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:09.779006    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:09.779029    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:09.784110    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:14:09.784119    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:14:09.800999    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:14:09.801015    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:14:09.823544    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:09.823554    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:09.847745    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:09.847759    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:09.887953    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:14:09.887965    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:14:09.904406    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:14:09.904417    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:09.916897    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:14:09.916910    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:14:09.943238    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:14:09.943252    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:14:09.959416    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:14:09.959428    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:14:09.974240    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:14:09.974251    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:14:09.987232    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:14:09.987243    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:14:10.000816    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:14:10.000831    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:14:10.013724    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:14:10.013741    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:14:12.539100    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:17.541459    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:17.541963    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:14:17.581917    9481 logs.go:282] 2 containers: [c488bd8e6e66 fc096b12f559]
	I1028 05:14:17.582038    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:14:17.600249    9481 logs.go:282] 2 containers: [2dc02bd3b294 d14d16734881]
	I1028 05:14:17.600384    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:14:17.614282    9481 logs.go:282] 1 containers: [def2e716e84e]
	I1028 05:14:17.614340    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:14:17.626445    9481 logs.go:282] 2 containers: [080c62d9e150 9954ffaa9f68]
	I1028 05:14:17.626497    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:14:17.638204    9481 logs.go:282] 1 containers: [227e19d4bf06]
	I1028 05:14:17.638256    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:14:17.649855    9481 logs.go:282] 2 containers: [60e2687677c0 84467d88e691]
	I1028 05:14:17.649910    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:14:17.661612    9481 logs.go:282] 0 containers: []
	W1028 05:14:17.661620    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:14:17.661661    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:14:17.672714    9481 logs.go:282] 0 containers: []
	W1028 05:14:17.672724    9481 logs.go:284] No container was found matching "storage-provisioner"
	I1028 05:14:17.672731    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:14:17.672738    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:14:17.710645    9481 logs.go:123] Gathering logs for kube-scheduler [080c62d9e150] ...
	I1028 05:14:17.710663    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080c62d9e150"
	I1028 05:14:17.724604    9481 logs.go:123] Gathering logs for kube-controller-manager [60e2687677c0] ...
	I1028 05:14:17.724615    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e2687677c0"
	I1028 05:14:17.742972    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:14:17.742986    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:14:17.755323    9481 logs.go:123] Gathering logs for kube-apiserver [fc096b12f559] ...
	I1028 05:14:17.755337    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc096b12f559"
	I1028 05:14:17.782295    9481 logs.go:123] Gathering logs for kube-controller-manager [84467d88e691] ...
	I1028 05:14:17.782314    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84467d88e691"
	I1028 05:14:17.797128    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:14:17.797142    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:14:17.801662    9481 logs.go:123] Gathering logs for kube-apiserver [c488bd8e6e66] ...
	I1028 05:14:17.801670    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c488bd8e6e66"
	I1028 05:14:17.816573    9481 logs.go:123] Gathering logs for etcd [2dc02bd3b294] ...
	I1028 05:14:17.816582    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dc02bd3b294"
	I1028 05:14:17.831920    9481 logs.go:123] Gathering logs for kube-scheduler [9954ffaa9f68] ...
	I1028 05:14:17.831932    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9954ffaa9f68"
	I1028 05:14:17.848425    9481 logs.go:123] Gathering logs for kube-proxy [227e19d4bf06] ...
	I1028 05:14:17.848434    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227e19d4bf06"
	I1028 05:14:17.863373    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:14:17.863383    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:14:17.888683    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:14:17.888692    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:14:17.931615    9481 logs.go:123] Gathering logs for etcd [d14d16734881] ...
	I1028 05:14:17.931633    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d14d16734881"
	I1028 05:14:17.948342    9481 logs.go:123] Gathering logs for coredns [def2e716e84e] ...
	I1028 05:14:17.948355    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def2e716e84e"
	I1028 05:14:20.462579    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:25.465341    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:25.465511    9481 kubeadm.go:597] duration metric: took 4m3.361189583s to restartPrimaryControlPlane
	W1028 05:14:25.465660    9481 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 05:14:25.465717    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1028 05:14:26.482087    9481 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.016377708s)
	I1028 05:14:26.482160    9481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 05:14:26.487166    9481 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 05:14:26.490201    9481 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 05:14:26.492900    9481 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 05:14:26.492907    9481 kubeadm.go:157] found existing configuration files:
	
	I1028 05:14:26.492936    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/admin.conf
	I1028 05:14:26.495440    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 05:14:26.495470    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 05:14:26.498724    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/kubelet.conf
	I1028 05:14:26.501590    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 05:14:26.501615    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 05:14:26.504387    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/controller-manager.conf
	I1028 05:14:26.507149    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 05:14:26.507183    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 05:14:26.510278    9481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/scheduler.conf
	I1028 05:14:26.513035    9481 kubeadm.go:163] "https://control-plane.minikube.internal:58252" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:58252 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 05:14:26.513069    9481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
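The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise so kubeadm init can regenerate it (a missing file makes grep exit with status 2, which is treated the same way). A hedged Go sketch of that check, with the endpoint and file list taken from the log lines above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:58252"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            // A non-zero exit covers both "endpoint not in file" and "file missing".
            if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
                fmt.Printf("%s does not reference %s - removing\n", conf, endpoint)
                _ = exec.Command("sudo", "rm", "-f", conf).Run()
            }
        }
    }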
	I1028 05:14:26.515718    9481 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 05:14:26.534934    9481 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1028 05:14:26.535039    9481 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 05:14:26.590171    9481 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 05:14:26.590226    9481 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 05:14:26.590277    9481 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 05:14:26.642809    9481 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 05:14:26.647017    9481 out.go:235]   - Generating certificates and keys ...
	I1028 05:14:26.647061    9481 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 05:14:26.647102    9481 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 05:14:26.647141    9481 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 05:14:26.647173    9481 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 05:14:26.647212    9481 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 05:14:26.647264    9481 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 05:14:26.647308    9481 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 05:14:26.647351    9481 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 05:14:26.647388    9481 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 05:14:26.647426    9481 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 05:14:26.647444    9481 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 05:14:26.647469    9481 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 05:14:26.682146    9481 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 05:14:26.728942    9481 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 05:14:26.805502    9481 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 05:14:26.903231    9481 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 05:14:26.937209    9481 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 05:14:26.937578    9481 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 05:14:26.937642    9481 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 05:14:27.029289    9481 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 05:14:27.033231    9481 out.go:235]   - Booting up control plane ...
	I1028 05:14:27.033277    9481 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 05:14:27.033326    9481 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 05:14:27.033361    9481 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 05:14:27.033399    9481 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 05:14:27.033762    9481 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 05:14:31.035809    9481 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001872 seconds
	I1028 05:14:31.035873    9481 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 05:14:31.039422    9481 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 05:14:31.551281    9481 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 05:14:31.551510    9481 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-451000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 05:14:32.055697    9481 kubeadm.go:310] [bootstrap-token] Using token: 6anzvo.rhr2ma4rf8dnbyau
	I1028 05:14:32.062178    9481 out.go:235]   - Configuring RBAC rules ...
	I1028 05:14:32.062247    9481 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 05:14:32.062294    9481 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 05:14:32.064627    9481 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 05:14:32.068573    9481 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 05:14:32.069518    9481 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 05:14:32.070454    9481 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 05:14:32.073628    9481 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 05:14:32.226941    9481 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 05:14:32.461823    9481 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 05:14:32.462341    9481 kubeadm.go:310] 
	I1028 05:14:32.462378    9481 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 05:14:32.462384    9481 kubeadm.go:310] 
	I1028 05:14:32.462427    9481 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 05:14:32.462431    9481 kubeadm.go:310] 
	I1028 05:14:32.462448    9481 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 05:14:32.462483    9481 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 05:14:32.462515    9481 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 05:14:32.462530    9481 kubeadm.go:310] 
	I1028 05:14:32.462577    9481 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 05:14:32.462582    9481 kubeadm.go:310] 
	I1028 05:14:32.462611    9481 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 05:14:32.462616    9481 kubeadm.go:310] 
	I1028 05:14:32.462647    9481 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 05:14:32.462714    9481 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 05:14:32.462773    9481 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 05:14:32.462780    9481 kubeadm.go:310] 
	I1028 05:14:32.462851    9481 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 05:14:32.462908    9481 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 05:14:32.462914    9481 kubeadm.go:310] 
	I1028 05:14:32.462974    9481 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6anzvo.rhr2ma4rf8dnbyau \
	I1028 05:14:32.463038    9481 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f88359f6e177cdd7e99d84e4e5a9c564acd9fd4ce77f443c1fef5fb70f89e325 \
	I1028 05:14:32.463064    9481 kubeadm.go:310] 	--control-plane 
	I1028 05:14:32.463067    9481 kubeadm.go:310] 
	I1028 05:14:32.463120    9481 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 05:14:32.463124    9481 kubeadm.go:310] 
	I1028 05:14:32.463180    9481 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6anzvo.rhr2ma4rf8dnbyau \
	I1028 05:14:32.463244    9481 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f88359f6e177cdd7e99d84e4e5a9c564acd9fd4ce77f443c1fef5fb70f89e325 
	I1028 05:14:32.463371    9481 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 05:14:32.463383    9481 cni.go:84] Creating CNI manager for ""
	I1028 05:14:32.463392    9481 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:14:32.467581    9481 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 05:14:32.474732    9481 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 05:14:32.478140    9481 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
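The 496-byte 1-k8s.conflist written above configures the bridge CNI chosen for the qemu2 driver with the docker runtime. Minikube's exact template is not shown in the log; the snippet below writes a generic bridge-plus-host-local conflist of the shape the CNI bridge plugin accepts, purely as an illustration (bridge name, subnet, and contents are assumptions, not minikube's byte-for-byte file):

    package main

    import "os"

    // A representative bridge CNI conflist: a "bridge" plugin with host-local
    // IPAM, the usual pairing for single-node clusters.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }
    `

    func main() {
        // Mirrors "scp memory --> /etc/cni/net.d/1-k8s.conflist" in the log.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }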
	I1028 05:14:32.483917    9481 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 05:14:32.484000    9481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 05:14:32.484027    9481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-451000 minikube.k8s.io/updated_at=2024_10_28T05_14_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=stopped-upgrade-451000 minikube.k8s.io/primary=true
	I1028 05:14:32.532876    9481 ops.go:34] apiserver oom_adj: -16
	I1028 05:14:32.532882    9481 kubeadm.go:1113] duration metric: took 48.929333ms to wait for elevateKubeSystemPrivileges
	I1028 05:14:32.532890    9481 kubeadm.go:394] duration metric: took 4m10.441555458s to StartCluster
	I1028 05:14:32.532908    9481 settings.go:142] acquiring lock: {Name:mka2e81574940ea53fced239aa2ef4cd7479a0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:14:32.533011    9481 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:14:32.533467    9481 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/kubeconfig: {Name:mk90a124f6c448e81120cf90ba82d6374e9cd851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:14:32.533689    9481 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:14:32.533694    9481 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 05:14:32.533733    9481 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-451000"
	I1028 05:14:32.533740    9481 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-451000"
	W1028 05:14:32.533743    9481 addons.go:243] addon storage-provisioner should already be in state true
	I1028 05:14:32.533755    9481 host.go:66] Checking if "stopped-upgrade-451000" exists ...
	I1028 05:14:32.533762    9481 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-451000"
	I1028 05:14:32.533772    9481 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-451000"
	I1028 05:14:32.533835    9481 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:14:32.536491    9481 out.go:177] * Verifying Kubernetes components...
	I1028 05:14:32.537163    9481 kapi.go:59] client config for stopped-upgrade-451000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/stopped-upgrade-451000/client.key", CAFile:"/Users/jenkins/minikube-integration/19875-6942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104a72680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 05:14:32.540960    9481 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-451000"
	W1028 05:14:32.540964    9481 addons.go:243] addon default-storageclass should already be in state true
	I1028 05:14:32.540972    9481 host.go:66] Checking if "stopped-upgrade-451000" exists ...
	I1028 05:14:32.541486    9481 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 05:14:32.541491    9481 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 05:14:32.541497    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	I1028 05:14:32.544569    9481 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 05:14:32.548563    9481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 05:14:32.552589    9481 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 05:14:32.552595    9481 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 05:14:32.552602    9481 sshutil.go:53] new ssh client: &{IP:localhost Port:58218 SSHKeyPath:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/stopped-upgrade-451000/id_rsa Username:docker}
	I1028 05:14:32.641269    9481 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 05:14:32.645951    9481 api_server.go:52] waiting for apiserver process to appear ...
	I1028 05:14:32.646002    9481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 05:14:32.650029    9481 api_server.go:72] duration metric: took 116.331125ms to wait for apiserver process to appear ...
	I1028 05:14:32.650037    9481 api_server.go:88] waiting for apiserver healthz status ...
	I1028 05:14:32.650045    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:32.663958    9481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 05:14:32.684371    9481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 05:14:33.033580    9481 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 05:14:33.033592    9481 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 05:14:37.652068    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:37.652114    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:42.652458    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:42.652483    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:47.652775    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:47.652816    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:52.653461    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:52.653484    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:14:57.654084    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:14:57.654110    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:15:02.654915    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:02.654954    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1028 05:15:03.035367    9481 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1028 05:15:03.044644    9481 out.go:177] * Enabled addons: storage-provisioner
	I1028 05:15:03.051679    9481 addons.go:510] duration metric: took 30.518649291s for enable addons: enabled=[storage-provisioner]
	I1028 05:15:07.655966    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:07.656018    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:15:12.657446    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:12.657470    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:15:17.659230    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:17.659287    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:15:22.661279    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:22.661312    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:15:27.663419    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:27.663473    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:15:32.665736    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:32.665917    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:15:32.687642    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:15:32.687719    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:15:32.704003    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:15:32.704085    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:15:32.714421    9481 logs.go:282] 2 containers: [a96a7852c393 bc4f56bb0d90]
	I1028 05:15:32.714500    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:15:32.724384    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:15:32.724453    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:15:32.734799    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:15:32.734882    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:15:32.745709    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:15:32.745774    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:15:32.756077    9481 logs.go:282] 0 containers: []
	W1028 05:15:32.756086    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:15:32.756143    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:15:32.767184    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:15:32.767198    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:15:32.767203    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:15:32.778698    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:15:32.778712    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:15:32.790404    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:15:32.790417    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:15:32.805443    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:15:32.805455    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:15:32.816946    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:15:32.816957    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:15:32.830028    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:15:32.830038    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:15:32.864634    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:15:32.864641    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:15:32.882794    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:15:32.882805    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:15:32.897809    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:15:32.897821    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:15:32.914342    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:15:32.914354    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:15:32.925682    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:15:32.925692    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:15:32.950506    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:15:32.950519    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:15:32.954883    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:15:32.954889    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
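[Editor's sketch] Each failed probe triggers the diagnostics pass recorded above: resolve every k8s_<component> container ID with a docker ps name filter (logs.go:282 reports the match count), then tail 400 lines from each match. The following is an illustrative local equivalent, not minikube's ssh_runner/logs code; running docker directly instead of over SSH into the guest is an assumption of the sketch.

    // Illustrative equivalent of the ssh_runner.go:195 / logs.go:282 pass.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        // cf. docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:282
            for _, id := range ids {
                // cf. /bin/bash -c "docker logs --tail 400 <id>"
                out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("--- %s [%s]: %d bytes of logs\n", c, id, len(out))
            }
        }
    }

Note that docker ps -a also lists exited containers, which matters later in this log when the coredns filter starts matching four IDs.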
	I1028 05:15:35.500357    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:15:40.502977    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:40.503449    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:15:40.537467    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:15:40.537606    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:15:40.557916    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:15:40.558016    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:15:40.572057    9481 logs.go:282] 2 containers: [a96a7852c393 bc4f56bb0d90]
	I1028 05:15:40.572135    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:15:40.583730    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:15:40.583810    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:15:40.594699    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:15:40.594767    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:15:40.609178    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:15:40.609267    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:15:40.619741    9481 logs.go:282] 0 containers: []
	W1028 05:15:40.619753    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:15:40.619821    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:15:40.630458    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:15:40.630474    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:15:40.630479    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:15:40.649651    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:15:40.649666    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:15:40.662071    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:15:40.662080    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:15:40.674465    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:15:40.674477    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:15:40.690100    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:15:40.690108    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:15:40.705572    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:15:40.705584    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:15:40.723696    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:15:40.723708    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:15:40.759970    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:15:40.759980    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:15:40.767560    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:15:40.767573    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:15:40.802035    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:15:40.802049    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:15:40.816441    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:15:40.816454    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:15:40.830177    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:15:40.830190    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:15:40.842190    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:15:40.842200    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
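[Editor's note] Besides the per-container tails, each pass collects host-level state from the guest: the last 400 journal lines for the kubelet unit and for the docker and cri-docker units; kernel messages filtered to warning severity and above (in the dmesg invocation, -H requests human-readable output while -P and -L=never strip its pager and color); kubectl describe nodes run with the version-pinned binary under /var/lib/minikube/binaries/v1.24.1/ against the in-guest kubeconfig; and a container inventory via sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, a shell fallback in which a missing crictl makes which fail, echo substitutes the bare command name, and a failed crictl invocation falls back to docker ps -a. The ordering of the gather steps varies from pass to pass, but the set of sources is the same each time.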
	I1028 05:15:43.367401    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:15:48.370261    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:48.370846    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:15:48.407327    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:15:48.407475    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:15:48.429691    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:15:48.429794    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:15:48.444608    9481 logs.go:282] 2 containers: [a96a7852c393 bc4f56bb0d90]
	I1028 05:15:48.444696    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:15:48.457186    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:15:48.457259    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:15:48.468142    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:15:48.468230    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:15:48.478897    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:15:48.478960    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:15:48.489777    9481 logs.go:282] 0 containers: []
	W1028 05:15:48.489785    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:15:48.489839    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:15:48.500245    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:15:48.500262    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:15:48.500267    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:15:48.511467    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:15:48.511481    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:15:48.545518    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:15:48.545531    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:15:48.560977    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:15:48.560990    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:15:48.573341    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:15:48.573354    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:15:48.584674    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:15:48.584688    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:15:48.597309    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:15:48.597320    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:15:48.621351    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:15:48.621359    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:15:48.655222    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:15:48.655231    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:15:48.659646    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:15:48.659656    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:15:48.673387    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:15:48.673395    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:15:48.688579    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:15:48.688592    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:15:48.700645    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:15:48.700658    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:15:51.219786    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:15:56.222524    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:15:56.223010    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:15:56.267967    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:15:56.268112    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:15:56.288474    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:15:56.288588    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:15:56.303216    9481 logs.go:282] 2 containers: [a96a7852c393 bc4f56bb0d90]
	I1028 05:15:56.303307    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:15:56.316368    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:15:56.316451    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:15:56.326824    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:15:56.326899    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:15:56.337758    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:15:56.337832    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:15:56.348088    9481 logs.go:282] 0 containers: []
	W1028 05:15:56.348098    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:15:56.348158    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:15:56.359092    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:15:56.359107    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:15:56.359113    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:15:56.393585    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:15:56.393593    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:15:56.407960    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:15:56.407970    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:15:56.425370    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:15:56.425381    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:15:56.437008    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:15:56.437021    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:15:56.459783    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:15:56.459795    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:15:56.471424    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:15:56.471433    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:15:56.476097    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:15:56.476105    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:15:56.510956    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:15:56.510968    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:15:56.526078    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:15:56.526087    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:15:56.537947    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:15:56.537957    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:15:56.554053    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:15:56.554066    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:15:56.577074    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:15:56.577082    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:15:59.090867    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:16:04.093210    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:16:04.093609    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:16:04.125343    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:16:04.125486    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:16:04.143749    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:16:04.143845    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:16:04.158286    9481 logs.go:282] 2 containers: [a96a7852c393 bc4f56bb0d90]
	I1028 05:16:04.158368    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:16:04.170423    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:16:04.170504    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:16:04.181150    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:16:04.181216    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:16:04.191820    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:16:04.191900    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:16:04.207193    9481 logs.go:282] 0 containers: []
	W1028 05:16:04.207205    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:16:04.207276    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:16:04.220834    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:16:04.220850    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:16:04.220856    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:16:04.235073    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:16:04.235084    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:16:04.250125    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:16:04.250136    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:16:04.267746    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:16:04.267759    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:16:04.303326    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:16:04.303335    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:16:04.307523    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:16:04.307533    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:16:04.320424    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:16:04.320436    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:16:04.332371    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:16:04.332384    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:16:04.344380    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:16:04.344390    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:16:04.355854    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:16:04.355866    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:16:04.379864    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:16:04.379870    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:16:04.391143    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:16:04.391155    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:16:04.425108    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:16:04.425123    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:16:06.942822    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:16:11.945502    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:16:11.946049    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:16:11.983475    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:16:11.983620    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:16:12.002831    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:16:12.002951    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:16:12.017059    9481 logs.go:282] 2 containers: [a96a7852c393 bc4f56bb0d90]
	I1028 05:16:12.017136    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:16:12.032397    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:16:12.032473    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:16:12.047942    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:16:12.048019    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:16:12.058363    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:16:12.058444    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:16:12.070274    9481 logs.go:282] 0 containers: []
	W1028 05:16:12.070289    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:16:12.070354    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:16:12.080374    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:16:12.080395    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:16:12.080401    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:16:12.116380    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:16:12.116392    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:16:12.150663    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:16:12.150676    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:16:12.165102    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:16:12.165115    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:16:12.177360    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:16:12.177373    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:16:12.189526    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:16:12.189542    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:16:12.204169    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:16:12.204182    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:16:12.227088    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:16:12.227094    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:16:12.238204    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:16:12.238215    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:16:12.243088    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:16:12.243094    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:16:12.257899    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:16:12.257913    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:16:12.269220    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:16:12.269234    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:16:12.286293    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:16:12.286303    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:16:14.800022    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:16:19.802862    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:16:19.803399    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:16:19.849887    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:16:19.850050    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:16:19.869065    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:16:19.869173    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:16:19.883696    9481 logs.go:282] 2 containers: [a96a7852c393 bc4f56bb0d90]
	I1028 05:16:19.883787    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:16:19.895545    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:16:19.895619    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:16:19.906703    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:16:19.906782    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:16:19.917607    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:16:19.917688    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:16:19.928000    9481 logs.go:282] 0 containers: []
	W1028 05:16:19.928010    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:16:19.928078    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:16:19.943302    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:16:19.943318    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:16:19.943324    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:16:19.978453    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:16:19.978466    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:16:19.990366    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:16:19.990379    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:16:20.005275    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:16:20.005287    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:16:20.022471    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:16:20.022482    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:16:20.034597    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:16:20.034609    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:16:20.059488    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:16:20.059495    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:16:20.094885    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:16:20.094893    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:16:20.099479    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:16:20.099488    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:16:20.111889    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:16:20.111900    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:16:20.123922    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:16:20.123932    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:16:20.134922    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:16:20.134933    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:16:20.149951    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:16:20.149962    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:16:22.666661    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:16:27.669184    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:16:27.669614    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:16:27.716281    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:16:27.716405    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:16:27.733501    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:16:27.733581    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:16:27.746546    9481 logs.go:282] 2 containers: [a96a7852c393 bc4f56bb0d90]
	I1028 05:16:27.746629    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:16:27.757820    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:16:27.757890    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:16:27.772458    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:16:27.772545    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:16:27.782692    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:16:27.782760    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:16:27.792711    9481 logs.go:282] 0 containers: []
	W1028 05:16:27.792720    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:16:27.792774    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:16:27.803196    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:16:27.803215    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:16:27.803221    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:16:27.817037    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:16:27.817048    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:16:27.821522    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:16:27.821531    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:16:27.857943    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:16:27.857955    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:16:27.871534    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:16:27.871547    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:16:27.885614    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:16:27.885626    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:16:27.901452    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:16:27.901462    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:16:27.920708    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:16:27.920721    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:16:27.940066    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:16:27.940076    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:16:27.953770    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:16:27.953780    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:16:27.989835    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:16:27.989843    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:16:28.004051    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:16:28.004063    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:16:28.027180    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:16:28.027188    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:16:30.540133    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:16:35.542345    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:16:35.542587    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:16:35.573616    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:16:35.573744    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:16:35.588234    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:16:35.588323    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:16:35.600604    9481 logs.go:282] 2 containers: [a96a7852c393 bc4f56bb0d90]
	I1028 05:16:35.600669    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:16:35.611080    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:16:35.611156    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:16:35.621657    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:16:35.621733    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:16:35.632147    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:16:35.632221    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:16:35.642320    9481 logs.go:282] 0 containers: []
	W1028 05:16:35.642336    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:16:35.642395    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:16:35.652629    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:16:35.652644    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:16:35.652649    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:16:35.667142    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:16:35.667155    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:16:35.690498    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:16:35.690513    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:16:35.726583    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:16:35.726592    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:16:35.759704    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:16:35.759718    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:16:35.773906    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:16:35.773916    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:16:35.790611    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:16:35.790623    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:16:35.802429    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:16:35.802442    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:16:35.826107    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:16:35.826116    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:16:35.836801    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:16:35.836813    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:16:35.841121    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:16:35.841126    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:16:35.852221    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:16:35.852230    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:16:35.864634    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:16:35.864645    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:16:38.377472    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:16:43.379717    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:16:43.379928    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:16:43.396559    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:16:43.396643    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:16:43.408888    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:16:43.408959    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:16:43.419065    9481 logs.go:282] 2 containers: [a96a7852c393 bc4f56bb0d90]
	I1028 05:16:43.419139    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:16:43.429527    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:16:43.429593    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:16:43.440450    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:16:43.440528    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:16:43.450579    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:16:43.450647    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:16:43.460778    9481 logs.go:282] 0 containers: []
	W1028 05:16:43.460792    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:16:43.460852    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:16:43.472352    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:16:43.472369    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:16:43.472374    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:16:43.506366    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:16:43.506376    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:16:43.520251    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:16:43.520263    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:16:43.531651    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:16:43.531664    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:16:43.543545    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:16:43.543558    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:16:43.558252    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:16:43.558264    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:16:43.570091    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:16:43.570103    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:16:43.588010    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:16:43.588020    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:16:43.611133    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:16:43.611139    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:16:43.615344    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:16:43.615352    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:16:43.649246    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:16:43.649256    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:16:43.663166    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:16:43.663176    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:16:43.674290    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:16:43.674304    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:16:46.189023    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:16:51.190006    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:16:51.190093    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:16:51.202786    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:16:51.202864    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:16:51.214240    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:16:51.214301    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:16:51.232738    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:16:51.232825    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:16:51.244089    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:16:51.244154    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:16:51.255953    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:16:51.256016    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:16:51.267191    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:16:51.267272    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:16:51.277591    9481 logs.go:282] 0 containers: []
	W1028 05:16:51.277604    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:16:51.277658    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:16:51.289030    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:16:51.289085    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:16:51.289094    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:16:51.301306    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:16:51.301318    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:16:51.326287    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:16:51.326306    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:16:51.331218    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:16:51.331230    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:16:51.347091    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:16:51.347104    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:16:51.359130    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:16:51.359142    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:16:51.372939    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:16:51.372950    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:16:51.391533    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:16:51.391552    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:16:51.429886    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:16:51.429902    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:16:51.471016    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:16:51.471028    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:16:51.487163    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:16:51.487175    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:16:51.500740    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:16:51.500748    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:16:51.512277    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:16:51.512286    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:16:51.528433    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:16:51.528444    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:16:51.541428    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:16:51.541442    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
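[Editor's note] The pass at 05:16:51 above is the first in which the coredns name filter matches four containers (e9ce8489fdc0 and 8dc59ee2997a alongside the earlier a96a7852c393 and bc4f56bb0d90). Because docker ps -a includes exited containers, this is consistent with the coredns containers having been restarted or recreated during the window in which healthz never answered; the apiserver, etcd, scheduler, proxy, controller-manager, and storage-provisioner IDs remain unchanged throughout the section.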
	I1028 05:16:54.058437    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:16:59.060206    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:16:59.060807    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:16:59.101134    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:16:59.101292    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:16:59.124201    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:16:59.124328    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:16:59.143716    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:16:59.143800    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:16:59.156470    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:16:59.156549    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:16:59.167080    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:16:59.167155    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:16:59.179783    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:16:59.179857    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:16:59.189857    9481 logs.go:282] 0 containers: []
	W1028 05:16:59.189868    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:16:59.189930    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:16:59.207121    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:16:59.207145    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:16:59.207151    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:16:59.211700    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:16:59.211706    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:16:59.223082    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:16:59.223092    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:16:59.240725    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:16:59.240738    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:16:59.252788    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:16:59.252800    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:16:59.286888    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:16:59.286895    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:16:59.301465    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:16:59.301476    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:16:59.315910    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:16:59.315920    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:16:59.327296    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:16:59.327307    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:16:59.338597    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:16:59.338606    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:16:59.354125    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:16:59.354138    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:16:59.378419    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:16:59.378428    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:16:59.413573    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:16:59.413587    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:16:59.425485    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:16:59.425495    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:16:59.441217    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:16:59.441232    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:17:01.955592    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:17:06.957985    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:17:06.958479    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:17:06.992181    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:17:06.992321    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:17:07.011264    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:17:07.011372    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:17:07.025617    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:17:07.025706    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:17:07.038038    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:17:07.038116    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:17:07.049096    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:17:07.049177    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:17:07.059703    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:17:07.059777    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:17:07.070146    9481 logs.go:282] 0 containers: []
	W1028 05:17:07.070157    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:17:07.070224    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:17:07.080704    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:17:07.080719    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:17:07.080724    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:17:07.115852    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:17:07.115863    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:17:07.131098    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:17:07.131110    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:17:07.142635    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:17:07.142647    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:17:07.154879    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:17:07.154892    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:17:07.190168    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:17:07.190179    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:17:07.204365    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:17:07.204373    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:17:07.216045    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:17:07.216058    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:17:07.227964    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:17:07.227976    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:17:07.239573    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:17:07.239586    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:17:07.254871    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:17:07.254883    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:17:07.266618    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:17:07.266629    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:17:07.290026    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:17:07.290033    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:17:07.294519    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:17:07.294527    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:17:07.306140    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:17:07.306149    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:17:09.825846    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:17:14.828073    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:17:14.828603    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:17:14.868476    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:17:14.868610    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:17:14.890831    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:17:14.890936    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:17:14.906658    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:17:14.906767    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:17:14.919174    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:17:14.919250    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:17:14.930798    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:17:14.930874    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:17:14.941905    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:17:14.941984    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:17:14.954522    9481 logs.go:282] 0 containers: []
	W1028 05:17:14.954535    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:17:14.954588    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:17:14.966735    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:17:14.966758    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:17:14.966764    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:17:14.981410    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:17:14.981425    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:17:14.993684    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:17:14.993698    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:17:15.006981    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:17:15.006991    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:17:15.011805    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:17:15.011815    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:17:15.027507    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:17:15.027518    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:17:15.045730    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:17:15.045743    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:17:15.071698    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:17:15.071712    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:17:15.108300    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:17:15.108312    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:17:15.121090    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:17:15.121104    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:17:15.133542    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:17:15.133554    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:17:15.169973    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:17:15.169992    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:17:15.185732    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:17:15.185747    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:17:15.198641    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:17:15.198653    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:17:15.211018    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:17:15.211028    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:17:17.727099    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:17:22.729258    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:17:22.729570    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:17:22.754231    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:17:22.754351    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:17:22.772523    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:17:22.772611    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:17:22.784954    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:17:22.785024    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:17:22.795440    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:17:22.795505    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:17:22.805890    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:17:22.805962    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:17:22.816339    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:17:22.816418    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:17:22.825916    9481 logs.go:282] 0 containers: []
	W1028 05:17:22.825927    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:17:22.825983    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:17:22.836080    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:17:22.836097    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:17:22.836102    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:17:22.860805    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:17:22.860814    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:17:22.896851    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:17:22.896861    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:17:22.915392    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:17:22.915405    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:17:22.930229    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:17:22.930242    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:17:22.941906    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:17:22.941917    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:17:22.962026    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:17:22.962035    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:17:22.978369    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:17:22.978379    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:17:22.991936    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:17:22.991947    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:17:23.007537    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:17:23.007549    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:17:23.020808    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:17:23.020822    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:17:23.032441    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:17:23.032455    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:17:23.043923    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:17:23.043937    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:17:23.055992    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:17:23.056005    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:17:23.060040    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:17:23.060049    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:17:25.596061    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:17:30.598324    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:17:30.598527    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:17:30.621984    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:17:30.622115    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:17:30.637437    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:17:30.637522    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:17:30.650132    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:17:30.650215    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:17:30.660634    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:17:30.660714    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:17:30.670992    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:17:30.671060    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:17:30.681464    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:17:30.681542    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:17:30.691496    9481 logs.go:282] 0 containers: []
	W1028 05:17:30.691505    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:17:30.691568    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:17:30.703122    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:17:30.703140    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:17:30.703145    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:17:30.715818    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:17:30.715828    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:17:30.732902    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:17:30.732912    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:17:30.744090    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:17:30.744100    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:17:30.780277    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:17:30.780285    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:17:30.784850    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:17:30.784858    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:17:30.801383    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:17:30.801393    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:17:30.812959    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:17:30.812972    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:17:30.825810    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:17:30.825822    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:17:30.840841    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:17:30.840854    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:17:30.852477    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:17:30.852491    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:17:30.863682    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:17:30.863695    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:17:30.887580    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:17:30.887587    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:17:30.930360    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:17:30.930374    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:17:30.946024    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:17:30.946038    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:17:33.463077    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:17:38.465878    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:17:38.466464    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:17:38.506105    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:17:38.506264    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:17:38.527360    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:17:38.527491    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:17:38.543295    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:17:38.543373    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:17:38.555690    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:17:38.555777    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:17:38.566401    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:17:38.566474    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:17:38.576823    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:17:38.576901    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:17:38.587539    9481 logs.go:282] 0 containers: []
	W1028 05:17:38.587550    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:17:38.587619    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:17:38.598570    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:17:38.598588    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:17:38.598593    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:17:38.602832    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:17:38.602840    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:17:38.616770    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:17:38.616778    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:17:38.634625    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:17:38.634636    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:17:38.669734    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:17:38.669741    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:17:38.703538    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:17:38.703548    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:17:38.717158    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:17:38.717169    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:17:38.729539    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:17:38.729553    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:17:38.741667    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:17:38.741676    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:17:38.759097    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:17:38.759108    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:17:38.773770    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:17:38.773783    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:17:38.785707    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:17:38.785716    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:17:38.797432    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:17:38.797441    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:17:38.809354    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:17:38.809365    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:17:38.834195    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:17:38.834204    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:17:41.348345    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:17:46.350441    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:17:46.350709    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:17:46.375665    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:17:46.375798    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:17:46.393552    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:17:46.393637    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:17:46.407132    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:17:46.407222    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:17:46.417776    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:17:46.417843    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:17:46.428175    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:17:46.428249    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:17:46.438604    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:17:46.438678    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:17:46.453021    9481 logs.go:282] 0 containers: []
	W1028 05:17:46.453038    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:17:46.453093    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:17:46.463279    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:17:46.463302    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:17:46.463308    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:17:46.475300    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:17:46.475310    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:17:46.487375    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:17:46.487387    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:17:46.521289    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:17:46.521299    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:17:46.538383    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:17:46.538396    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:17:46.554626    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:17:46.554639    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:17:46.558861    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:17:46.558871    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:17:46.570579    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:17:46.570592    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:17:46.594902    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:17:46.594911    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:17:46.607046    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:17:46.607059    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:17:46.624118    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:17:46.624128    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:17:46.642525    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:17:46.642536    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:17:46.654046    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:17:46.654062    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:17:46.689201    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:17:46.689217    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:17:46.703639    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:17:46.703653    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:17:49.218138    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:17:54.219846    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:17:54.219964    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:17:54.232803    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:17:54.232887    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:17:54.245367    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:17:54.245453    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:17:54.258034    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:17:54.258124    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:17:54.270553    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:17:54.270628    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:17:54.283972    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:17:54.284060    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:17:54.296656    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:17:54.296744    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:17:54.314496    9481 logs.go:282] 0 containers: []
	W1028 05:17:54.314508    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:17:54.314569    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:17:54.327147    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:17:54.327167    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:17:54.327173    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:17:54.345856    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:17:54.345864    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:17:54.361084    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:17:54.361099    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:17:54.372985    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:17:54.372999    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:17:54.408460    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:17:54.408472    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:17:54.423354    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:17:54.423364    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:17:54.438658    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:17:54.438671    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:17:54.463057    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:17:54.463065    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:17:54.474744    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:17:54.474754    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:17:54.488108    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:17:54.488118    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:17:54.499796    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:17:54.499806    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:17:54.511283    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:17:54.511293    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:17:54.528906    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:17:54.528918    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:17:54.565183    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:17:54.565191    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:17:54.569582    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:17:54.569588    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:17:57.082934    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:18:02.085565    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:18:02.085751    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:18:02.102396    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:18:02.102477    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:18:02.114425    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:18:02.114497    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:18:02.125267    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:18:02.125346    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:18:02.136343    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:18:02.136412    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:18:02.146645    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:18:02.146721    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:18:02.161010    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:18:02.161076    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:18:02.171389    9481 logs.go:282] 0 containers: []
	W1028 05:18:02.171400    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:18:02.171461    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:18:02.181967    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:18:02.181986    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:18:02.181992    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:18:02.195919    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:18:02.195929    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:18:02.207802    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:18:02.207815    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:18:02.219622    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:18:02.219636    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:18:02.231989    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:18:02.232001    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:18:02.243652    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:18:02.243663    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:18:02.256006    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:18:02.256019    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:18:02.292346    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:18:02.292353    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:18:02.307482    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:18:02.307493    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:18:02.321786    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:18:02.321799    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:18:02.333815    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:18:02.333827    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:18:02.369045    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:18:02.369058    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:18:02.386880    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:18:02.386890    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:18:02.399115    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:18:02.399128    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:18:02.422494    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:18:02.422501    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:18:04.927332    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:18:09.929596    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:18:09.930107    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:18:09.975640    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:18:09.975801    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:18:09.995983    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:18:09.996081    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:18:10.010076    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:18:10.010166    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:18:10.022178    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:18:10.022255    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:18:10.037994    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:18:10.038072    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:18:10.048621    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:18:10.048691    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:18:10.059159    9481 logs.go:282] 0 containers: []
	W1028 05:18:10.059169    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:18:10.059233    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:18:10.069560    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:18:10.069577    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:18:10.069582    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:18:10.105450    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:18:10.105458    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:18:10.109408    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:18:10.109414    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:18:10.123330    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:18:10.123343    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:18:10.135667    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:18:10.135681    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:18:10.150934    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:18:10.150948    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:18:10.162721    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:18:10.162736    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:18:10.178536    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:18:10.178548    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:18:10.202137    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:18:10.202167    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:18:10.253880    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:18:10.253893    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:18:10.267973    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:18:10.267985    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:18:10.279647    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:18:10.279658    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:18:10.298250    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:18:10.298261    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:18:10.312304    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:18:10.312318    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:18:10.325337    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:18:10.325350    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:18:12.839699    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:18:17.842031    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:18:17.842107    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:18:17.854031    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:18:17.854109    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:18:17.866737    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:18:17.866809    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:18:17.878215    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:18:17.878282    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:18:17.888862    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:18:17.888935    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:18:17.900759    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:18:17.900845    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:18:17.912891    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:18:17.912953    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:18:17.924401    9481 logs.go:282] 0 containers: []
	W1028 05:18:17.924411    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:18:17.924463    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:18:17.935214    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:18:17.935233    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:18:17.935238    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:18:17.948665    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:18:17.948677    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:18:17.962219    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:18:17.962235    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:18:17.974918    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:18:17.974925    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:18:17.987550    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:18:17.987563    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:18:18.000645    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:18:18.000656    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:18:18.014002    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:18:18.014011    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:18:18.050679    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:18:18.050698    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:18:18.087067    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:18:18.087079    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:18:18.103490    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:18:18.103502    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:18:18.118740    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:18:18.118751    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:18:18.130873    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:18:18.130881    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:18:18.148961    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:18:18.148973    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:18:18.175411    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:18:18.175420    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:18:18.180096    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:18:18.180103    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:18:20.696554    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:18:25.699152    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:18:25.699715    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 05:18:25.737126    9481 logs.go:282] 1 containers: [220702e4be0e]
	I1028 05:18:25.737274    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 05:18:25.758417    9481 logs.go:282] 1 containers: [e620bd415877]
	I1028 05:18:25.758560    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 05:18:25.773763    9481 logs.go:282] 4 containers: [e9ce8489fdc0 8dc59ee2997a a96a7852c393 bc4f56bb0d90]
	I1028 05:18:25.773859    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 05:18:25.786121    9481 logs.go:282] 1 containers: [704c253f9667]
	I1028 05:18:25.786199    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 05:18:25.797954    9481 logs.go:282] 1 containers: [0945f87d2543]
	I1028 05:18:25.798021    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 05:18:25.808450    9481 logs.go:282] 1 containers: [0ab7a91f70bd]
	I1028 05:18:25.808512    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 05:18:25.823148    9481 logs.go:282] 0 containers: []
	W1028 05:18:25.823163    9481 logs.go:284] No container was found matching "kindnet"
	I1028 05:18:25.823229    9481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 05:18:25.833860    9481 logs.go:282] 1 containers: [e3bf74c67489]
	I1028 05:18:25.833880    9481 logs.go:123] Gathering logs for etcd [e620bd415877] ...
	I1028 05:18:25.833885    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e620bd415877"
	I1028 05:18:25.847848    9481 logs.go:123] Gathering logs for coredns [bc4f56bb0d90] ...
	I1028 05:18:25.847860    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc4f56bb0d90"
	I1028 05:18:25.859631    9481 logs.go:123] Gathering logs for kube-controller-manager [0ab7a91f70bd] ...
	I1028 05:18:25.859644    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab7a91f70bd"
	I1028 05:18:25.876923    9481 logs.go:123] Gathering logs for Docker ...
	I1028 05:18:25.876935    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 05:18:25.899907    9481 logs.go:123] Gathering logs for dmesg ...
	I1028 05:18:25.899915    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 05:18:25.904537    9481 logs.go:123] Gathering logs for kube-proxy [0945f87d2543] ...
	I1028 05:18:25.904545    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0945f87d2543"
	I1028 05:18:25.916999    9481 logs.go:123] Gathering logs for container status ...
	I1028 05:18:25.917010    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 05:18:25.929321    9481 logs.go:123] Gathering logs for kube-scheduler [704c253f9667] ...
	I1028 05:18:25.929334    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704c253f9667"
	I1028 05:18:25.954258    9481 logs.go:123] Gathering logs for storage-provisioner [e3bf74c67489] ...
	I1028 05:18:25.954269    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3bf74c67489"
	I1028 05:18:25.965861    9481 logs.go:123] Gathering logs for kubelet ...
	I1028 05:18:25.965870    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 05:18:26.000291    9481 logs.go:123] Gathering logs for describe nodes ...
	I1028 05:18:26.000302    9481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 05:18:26.033929    9481 logs.go:123] Gathering logs for kube-apiserver [220702e4be0e] ...
	I1028 05:18:26.033939    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 220702e4be0e"
	I1028 05:18:26.048614    9481 logs.go:123] Gathering logs for coredns [8dc59ee2997a] ...
	I1028 05:18:26.048626    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc59ee2997a"
	I1028 05:18:26.061810    9481 logs.go:123] Gathering logs for coredns [e9ce8489fdc0] ...
	I1028 05:18:26.061822    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ce8489fdc0"
	I1028 05:18:26.074488    9481 logs.go:123] Gathering logs for coredns [a96a7852c393] ...
	I1028 05:18:26.074500    9481 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a96a7852c393"
	I1028 05:18:28.592675    9481 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1028 05:18:33.594930    9481 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1028 05:18:33.599039    9481 out.go:201] 
	W1028 05:18:33.603055    9481 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1028 05:18:33.603081    9481 out.go:270] * 
	W1028 05:18:33.605517    9481 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:18:33.620950    9481 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-451000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (572.70s)
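The repeated "Checking apiserver healthz ..." / "stopped: ... context deadline exceeded" pairs above show minikube polling the apiserver health endpoint every few seconds until its 6m0s node-wait budget runs out, re-gathering component logs between failed probes; that is why the same "Gathering logs for ..." cycle recurs throughout the transcript. A minimal Go sketch of that poll pattern, assuming an illustrative retry interval and per-probe timeout (this is not minikube's actual api_server.go code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz probes url until it returns 200 OK or the overall deadline
	// passes. Each probe uses a short client timeout, mirroring the ~5s gap
	// between a "Checking apiserver healthz" entry and the matching
	// "stopped: ... Client.Timeout exceeded" entry in the log above.
	func pollHealthz(url string, wait time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-probe timeout (assumed value)
			Transport: &http.Transport{
				// The guest apiserver at 10.0.2.15:8443 serves a self-signed cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(wait)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(2500 * time.Millisecond) // retry interval (assumed value)
		}
		return fmt.Errorf("apiserver healthz never reported healthy after %s", wait)
	}

	func main() {
		if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("GUEST_START:", err)
		}
	}

In this run every probe timed out, so the loop fell through to the GUEST_START error reported above.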

TestPause/serial/Start (10.05s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-585000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-585000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.977290834s)

-- stdout --
	* [pause-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-585000" primary control-plane node in "pause-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-585000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-585000 -n pause-585000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-585000 -n pause-585000: exit status 7 (72.872916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-585000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.05s)
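
Note: every failure in this stretch of the report shares the same root cause, visible in the stdout above: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so VM creation never gets past network setup and each test exits with status 80 (GUEST_PROVISION). A minimal check on the build agent, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs (paths and service management may differ on other hosts):

	# Is anything serving the unix socket the driver dials?
	ls -l /var/run/socket_vmnet
	# Restart the daemon; socket_vmnet needs root privileges to create the vmnet interface.
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet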

TestNoKubernetes/serial/StartWithK8s (10.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-489000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-489000 --driver=qemu2 : exit status 80 (9.963081709s)

-- stdout --
	* [NoKubernetes-489000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-489000" primary control-plane node in "NoKubernetes-489000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-489000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-489000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-489000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-489000 -n NoKubernetes-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-489000 -n NoKubernetes-489000: exit status 7 (59.552917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.02s)

TestNoKubernetes/serial/StartWithStopK8s (5.33s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-489000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-489000 --no-kubernetes --driver=qemu2 : exit status 80 (5.258639667s)

-- stdout --
	* [NoKubernetes-489000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-489000
	* Restarting existing qemu2 VM for "NoKubernetes-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-489000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-489000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-489000 -n NoKubernetes-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-489000 -n NoKubernetes-489000: exit status 7 (67.958583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.33s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-489000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-489000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247312875s)

-- stdout --
	* [NoKubernetes-489000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-489000
	* Restarting existing qemu2 VM for "NoKubernetes-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-489000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-489000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-489000 -n NoKubernetes-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-489000 -n NoKubernetes-489000: exit status 7 (68.990041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-489000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-489000 --driver=qemu2 : exit status 80 (5.270167125s)

-- stdout --
	* [NoKubernetes-489000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-489000
	* Restarting existing qemu2 VM for "NoKubernetes-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-489000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-489000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-489000 -n NoKubernetes-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-489000 -n NoKubernetes-489000: exit status 7 (70.522209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)
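
The four NoKubernetes subtests share one profile: StartWithK8s creates NoKubernetes-489000, and the three later subtests restart it ("Using the qemu2 driver based on existing profile"), so after the first failure the rest exercise the restart path of the same broken VM. The stderr's suggested `minikube delete` is the narrow fix; a sketch of a reset using the suite's own binary (the --all and --purge flags are standard minikube delete options, not taken from this log):

	# Remove just the profile these subtests share, as the stderr suggests.
	out/minikube-darwin-arm64 delete -p NoKubernetes-489000
	# Or clear every profile on the agent before a re-run.
	out/minikube-darwin-arm64 delete --all --purge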

TestNetworkPlugins/group/auto/Start (9.77s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.768200583s)

-- stdout --
	* [auto-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-181000" primary control-plane node in "auto-181000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-181000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:16:40.481487    9668 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:16:40.481632    9668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:16:40.481636    9668 out.go:358] Setting ErrFile to fd 2...
	I1028 05:16:40.481638    9668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:16:40.481770    9668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:16:40.482990    9668 out.go:352] Setting JSON to false
	I1028 05:16:40.501143    9668 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6371,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:16:40.501231    9668 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:16:40.507724    9668 out.go:177] * [auto-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:16:40.515709    9668 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:16:40.515773    9668 notify.go:220] Checking for updates...
	I1028 05:16:40.521619    9668 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:16:40.524603    9668 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:16:40.525712    9668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:16:40.528604    9668 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:16:40.531612    9668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:16:40.535042    9668 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:16:40.535115    9668 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:16:40.535155    9668 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:16:40.539567    9668 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:16:40.546655    9668 start.go:297] selected driver: qemu2
	I1028 05:16:40.546663    9668 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:16:40.546670    9668 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:16:40.549110    9668 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:16:40.551611    9668 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:16:40.554720    9668 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:16:40.554737    9668 cni.go:84] Creating CNI manager for ""
	I1028 05:16:40.554756    9668 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:16:40.554762    9668 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:16:40.554788    9668 start.go:340] cluster config:
	{Name:auto-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:16:40.559067    9668 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:16:40.567596    9668 out.go:177] * Starting "auto-181000" primary control-plane node in "auto-181000" cluster
	I1028 05:16:40.571657    9668 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:16:40.571673    9668 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:16:40.571680    9668 cache.go:56] Caching tarball of preloaded images
	I1028 05:16:40.571759    9668 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:16:40.571765    9668 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:16:40.571821    9668 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/auto-181000/config.json ...
	I1028 05:16:40.571830    9668 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/auto-181000/config.json: {Name:mkcd393be95a2dfb55682dc78abcf6da6d829d8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:16:40.572066    9668 start.go:360] acquireMachinesLock for auto-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:16:40.572109    9668 start.go:364] duration metric: took 36.709µs to acquireMachinesLock for "auto-181000"
	I1028 05:16:40.572121    9668 start.go:93] Provisioning new machine with config: &{Name:auto-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:16:40.572144    9668 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:16:40.579636    9668 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:16:40.594617    9668 start.go:159] libmachine.API.Create for "auto-181000" (driver="qemu2")
	I1028 05:16:40.594647    9668 client.go:168] LocalClient.Create starting
	I1028 05:16:40.594717    9668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:16:40.594759    9668 main.go:141] libmachine: Decoding PEM data...
	I1028 05:16:40.594770    9668 main.go:141] libmachine: Parsing certificate...
	I1028 05:16:40.594811    9668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:16:40.594840    9668 main.go:141] libmachine: Decoding PEM data...
	I1028 05:16:40.594848    9668 main.go:141] libmachine: Parsing certificate...
	I1028 05:16:40.595249    9668 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:16:40.751820    9668 main.go:141] libmachine: Creating SSH key...
	I1028 05:16:40.793499    9668 main.go:141] libmachine: Creating Disk image...
	I1028 05:16:40.793504    9668 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:16:40.793692    9668 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2
	I1028 05:16:40.803633    9668 main.go:141] libmachine: STDOUT: 
	I1028 05:16:40.803655    9668 main.go:141] libmachine: STDERR: 
	I1028 05:16:40.803710    9668 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2 +20000M
	I1028 05:16:40.812313    9668 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:16:40.812328    9668 main.go:141] libmachine: STDERR: 
	I1028 05:16:40.812342    9668 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2
	I1028 05:16:40.812348    9668 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:16:40.812362    9668 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:16:40.812388    9668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:eb:3f:48:f5:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2
	I1028 05:16:40.814179    9668 main.go:141] libmachine: STDOUT: 
	I1028 05:16:40.814211    9668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:16:40.814239    9668 client.go:171] duration metric: took 219.5885ms to LocalClient.Create
	I1028 05:16:42.816404    9668 start.go:128] duration metric: took 2.244278917s to createHost
	I1028 05:16:42.816520    9668 start.go:83] releasing machines lock for "auto-181000", held for 2.244448792s
	W1028 05:16:42.816614    9668 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:16:42.832063    9668 out.go:177] * Deleting "auto-181000" in qemu2 ...
	W1028 05:16:42.862002    9668 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:16:42.862033    9668 start.go:729] Will try again in 5 seconds ...
	I1028 05:16:47.864180    9668 start.go:360] acquireMachinesLock for auto-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:16:47.864779    9668 start.go:364] duration metric: took 483.375µs to acquireMachinesLock for "auto-181000"
	I1028 05:16:47.864852    9668 start.go:93] Provisioning new machine with config: &{Name:auto-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:16:47.865077    9668 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:16:47.875753    9668 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:16:47.925639    9668 start.go:159] libmachine.API.Create for "auto-181000" (driver="qemu2")
	I1028 05:16:47.925703    9668 client.go:168] LocalClient.Create starting
	I1028 05:16:47.925845    9668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:16:47.925929    9668 main.go:141] libmachine: Decoding PEM data...
	I1028 05:16:47.925946    9668 main.go:141] libmachine: Parsing certificate...
	I1028 05:16:47.926032    9668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:16:47.926089    9668 main.go:141] libmachine: Decoding PEM data...
	I1028 05:16:47.926106    9668 main.go:141] libmachine: Parsing certificate...
	I1028 05:16:47.926893    9668 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:16:48.093444    9668 main.go:141] libmachine: Creating SSH key...
	I1028 05:16:48.150273    9668 main.go:141] libmachine: Creating Disk image...
	I1028 05:16:48.150280    9668 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:16:48.150472    9668 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2
	I1028 05:16:48.160722    9668 main.go:141] libmachine: STDOUT: 
	I1028 05:16:48.160752    9668 main.go:141] libmachine: STDERR: 
	I1028 05:16:48.160823    9668 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2 +20000M
	I1028 05:16:48.169487    9668 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:16:48.169506    9668 main.go:141] libmachine: STDERR: 
	I1028 05:16:48.169519    9668 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2
	I1028 05:16:48.169523    9668 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:16:48.169531    9668 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:16:48.169563    9668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:d3:01:e4:ea:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/auto-181000/disk.qcow2
	I1028 05:16:48.171402    9668 main.go:141] libmachine: STDOUT: 
	I1028 05:16:48.171419    9668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:16:48.171433    9668 client.go:171] duration metric: took 245.728625ms to LocalClient.Create
	I1028 05:16:50.173669    9668 start.go:128] duration metric: took 2.308528792s to createHost
	I1028 05:16:50.173746    9668 start.go:83] releasing machines lock for "auto-181000", held for 2.308994292s
	W1028 05:16:50.174058    9668 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:16:50.183721    9668 out.go:201] 
	W1028 05:16:50.189847    9668 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:16:50.189873    9668 out.go:270] * 
	* 
	W1028 05:16:50.192766    9668 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:16:50.202743    9668 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.77s)
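
The alsologtostderr trace above shows the exact invocation that fails: libmachine wraps qemu-system-aarch64 in /opt/socket_vmnet/bin/socket_vmnet_client, which dials /var/run/socket_vmnet and hands the connection to QEMU as an inherited file descriptor (hence `-netdev socket,id=net0,fd=3` on the command line). The connection step can be reproduced in isolation by wrapping a no-op instead of QEMU; a sketch using the same paths as the log (a hypothetical troubleshooting command, not part of the suite):

	# Dials the socket, then execs the wrapped command with the connection as an fd.
	# When the daemon is down, this fails with the same "Connection refused" as above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true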

TestNetworkPlugins/group/kindnet/Start (10.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.099820625s)

-- stdout --
	* [kindnet-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-181000" primary control-plane node in "kindnet-181000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-181000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:16:52.652303    9777 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:16:52.652460    9777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:16:52.652463    9777 out.go:358] Setting ErrFile to fd 2...
	I1028 05:16:52.652466    9777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:16:52.652600    9777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:16:52.653844    9777 out.go:352] Setting JSON to false
	I1028 05:16:52.672044    9777 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6383,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:16:52.672114    9777 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:16:52.677667    9777 out.go:177] * [kindnet-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:16:52.684602    9777 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:16:52.684652    9777 notify.go:220] Checking for updates...
	I1028 05:16:52.690567    9777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:16:52.693678    9777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:16:52.696647    9777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:16:52.699632    9777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:16:52.702619    9777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:16:52.705961    9777 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:16:52.706030    9777 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:16:52.706078    9777 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:16:52.710552    9777 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:16:52.717663    9777 start.go:297] selected driver: qemu2
	I1028 05:16:52.717670    9777 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:16:52.717678    9777 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:16:52.720192    9777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:16:52.723586    9777 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:16:52.726699    9777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:16:52.726725    9777 cni.go:84] Creating CNI manager for "kindnet"
	I1028 05:16:52.726729    9777 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 05:16:52.726767    9777 start.go:340] cluster config:
	{Name:kindnet-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:16:52.731439    9777 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:16:52.739620    9777 out.go:177] * Starting "kindnet-181000" primary control-plane node in "kindnet-181000" cluster
	I1028 05:16:52.743607    9777 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:16:52.743629    9777 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:16:52.743639    9777 cache.go:56] Caching tarball of preloaded images
	I1028 05:16:52.743715    9777 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:16:52.743721    9777 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:16:52.743774    9777 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/kindnet-181000/config.json ...
	I1028 05:16:52.743785    9777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/kindnet-181000/config.json: {Name:mkfd9f22bff18cdbf82ee701995201f198ad98e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:16:52.744160    9777 start.go:360] acquireMachinesLock for kindnet-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:16:52.744208    9777 start.go:364] duration metric: took 42.208µs to acquireMachinesLock for "kindnet-181000"
	I1028 05:16:52.744219    9777 start.go:93] Provisioning new machine with config: &{Name:kindnet-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:16:52.744249    9777 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:16:52.752620    9777 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:16:52.769119    9777 start.go:159] libmachine.API.Create for "kindnet-181000" (driver="qemu2")
	I1028 05:16:52.769146    9777 client.go:168] LocalClient.Create starting
	I1028 05:16:52.769216    9777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:16:52.769251    9777 main.go:141] libmachine: Decoding PEM data...
	I1028 05:16:52.769260    9777 main.go:141] libmachine: Parsing certificate...
	I1028 05:16:52.769303    9777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:16:52.769331    9777 main.go:141] libmachine: Decoding PEM data...
	I1028 05:16:52.769341    9777 main.go:141] libmachine: Parsing certificate...
	I1028 05:16:52.769767    9777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:16:52.928483    9777 main.go:141] libmachine: Creating SSH key...
	I1028 05:16:53.065335    9777 main.go:141] libmachine: Creating Disk image...
	I1028 05:16:53.065343    9777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:16:53.065544    9777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2
	I1028 05:16:53.075676    9777 main.go:141] libmachine: STDOUT: 
	I1028 05:16:53.075696    9777 main.go:141] libmachine: STDERR: 
	I1028 05:16:53.075763    9777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2 +20000M
	I1028 05:16:53.084252    9777 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:16:53.084267    9777 main.go:141] libmachine: STDERR: 
	I1028 05:16:53.084284    9777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2
	I1028 05:16:53.084290    9777 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:16:53.084301    9777 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:16:53.084334    9777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:20:cc:b4:44:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2
	I1028 05:16:53.086166    9777 main.go:141] libmachine: STDOUT: 
	I1028 05:16:53.086183    9777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:16:53.086205    9777 client.go:171] duration metric: took 317.059417ms to LocalClient.Create
	I1028 05:16:55.088394    9777 start.go:128] duration metric: took 2.344165458s to createHost
	I1028 05:16:55.088473    9777 start.go:83] releasing machines lock for "kindnet-181000", held for 2.344306959s
	W1028 05:16:55.088526    9777 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:16:55.098684    9777 out.go:177] * Deleting "kindnet-181000" in qemu2 ...
	W1028 05:16:55.131736    9777 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:16:55.131784    9777 start.go:729] Will try again in 5 seconds ...
	I1028 05:17:00.133957    9777 start.go:360] acquireMachinesLock for kindnet-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:17:00.134709    9777 start.go:364] duration metric: took 632.083µs to acquireMachinesLock for "kindnet-181000"
	I1028 05:17:00.134858    9777 start.go:93] Provisioning new machine with config: &{Name:kindnet-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:kindnet-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:17:00.135158    9777 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:17:00.145762    9777 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:17:00.194283    9777 start.go:159] libmachine.API.Create for "kindnet-181000" (driver="qemu2")
	I1028 05:17:00.194343    9777 client.go:168] LocalClient.Create starting
	I1028 05:17:00.194514    9777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:17:00.194616    9777 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:00.194631    9777 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:00.194693    9777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:17:00.194751    9777 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:00.194769    9777 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:00.195518    9777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:17:00.363863    9777 main.go:141] libmachine: Creating SSH key...
	I1028 05:17:00.658921    9777 main.go:141] libmachine: Creating Disk image...
	I1028 05:17:00.658935    9777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:17:00.659141    9777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2
	I1028 05:17:00.669960    9777 main.go:141] libmachine: STDOUT: 
	I1028 05:17:00.669983    9777 main.go:141] libmachine: STDERR: 
	I1028 05:17:00.670061    9777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2 +20000M
	I1028 05:17:00.679129    9777 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:17:00.679145    9777 main.go:141] libmachine: STDERR: 
	I1028 05:17:00.679161    9777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2
	I1028 05:17:00.679166    9777 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:17:00.679177    9777 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:17:00.679219    9777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:9f:58:e7:06:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kindnet-181000/disk.qcow2
	I1028 05:17:00.681213    9777 main.go:141] libmachine: STDOUT: 
	I1028 05:17:00.681228    9777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:17:00.681244    9777 client.go:171] duration metric: took 486.9055ms to LocalClient.Create
	I1028 05:17:02.683343    9777 start.go:128] duration metric: took 2.548198167s to createHost
	I1028 05:17:02.683382    9777 start.go:83] releasing machines lock for "kindnet-181000", held for 2.548708375s
	W1028 05:17:02.683525    9777 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:02.691461    9777 out.go:201] 
	W1028 05:17:02.697531    9777 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:17:02.697556    9777 out.go:270] * 
	* 
	W1028 05:17:02.698065    9777 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:17:02.710418    9777 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.10s)
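Every failure in this group reduces to the same line in the driver output: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver launches the VM through socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket; "connection refused" means nothing is listening at /var/run/socket_vmnet on this agent, so host creation aborts before the guest ever boots. The minimal Go probe below (illustrative only, not minikube code) reproduces that check:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "Connection refused" on a unix socket means the socket file exists
        // but no daemon is accepting connections on it; this is exactly the
        // error socket_vmnet_client reports in the logs above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

Running this on the CI host would distinguish a stopped daemon ("connection refused") from a socket file that was never created ("no such file or directory").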

TestNetworkPlugins/group/calico/Start (9.77s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.772069958s)

-- stdout --
	* [calico-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-181000" primary control-plane node in "calico-181000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-181000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:17:05.139551    9894 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:17:05.139701    9894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:17:05.139704    9894 out.go:358] Setting ErrFile to fd 2...
	I1028 05:17:05.139706    9894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:17:05.139837    9894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:17:05.141057    9894 out.go:352] Setting JSON to false
	I1028 05:17:05.159257    9894 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6396,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:17:05.159330    9894 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:17:05.167728    9894 out.go:177] * [calico-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:17:05.174761    9894 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:17:05.174849    9894 notify.go:220] Checking for updates...
	I1028 05:17:05.181747    9894 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:17:05.184766    9894 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:17:05.187747    9894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:17:05.190757    9894 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:17:05.193723    9894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:17:05.197132    9894 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:17:05.197207    9894 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:17:05.197261    9894 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:17:05.200631    9894 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:17:05.209707    9894 start.go:297] selected driver: qemu2
	I1028 05:17:05.209714    9894 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:17:05.209721    9894 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:17:05.212163    9894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:17:05.216734    9894 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:17:05.219837    9894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:17:05.219854    9894 cni.go:84] Creating CNI manager for "calico"
	I1028 05:17:05.219860    9894 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1028 05:17:05.219897    9894 start.go:340] cluster config:
	{Name:calico-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:17:05.224171    9894 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:17:05.232716    9894 out.go:177] * Starting "calico-181000" primary control-plane node in "calico-181000" cluster
	I1028 05:17:05.236704    9894 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:17:05.236727    9894 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:17:05.236736    9894 cache.go:56] Caching tarball of preloaded images
	I1028 05:17:05.236809    9894 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:17:05.236814    9894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:17:05.236864    9894 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/calico-181000/config.json ...
	I1028 05:17:05.236874    9894 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/calico-181000/config.json: {Name:mk4662b4c501f536cedd7022e7acf4f1d578f555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:17:05.237128    9894 start.go:360] acquireMachinesLock for calico-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:17:05.237173    9894 start.go:364] duration metric: took 39.583µs to acquireMachinesLock for "calico-181000"
	I1028 05:17:05.237184    9894 start.go:93] Provisioning new machine with config: &{Name:calico-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:calico-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:17:05.237217    9894 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:17:05.241721    9894 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:17:05.257264    9894 start.go:159] libmachine.API.Create for "calico-181000" (driver="qemu2")
	I1028 05:17:05.257297    9894 client.go:168] LocalClient.Create starting
	I1028 05:17:05.257367    9894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:17:05.257403    9894 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:05.257412    9894 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:05.257446    9894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:17:05.257475    9894 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:05.257484    9894 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:05.257866    9894 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:17:05.415745    9894 main.go:141] libmachine: Creating SSH key...
	I1028 05:17:05.466554    9894 main.go:141] libmachine: Creating Disk image...
	I1028 05:17:05.466560    9894 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:17:05.466753    9894 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2
	I1028 05:17:05.476690    9894 main.go:141] libmachine: STDOUT: 
	I1028 05:17:05.476712    9894 main.go:141] libmachine: STDERR: 
	I1028 05:17:05.476773    9894 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2 +20000M
	I1028 05:17:05.485488    9894 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:17:05.485505    9894 main.go:141] libmachine: STDERR: 
	I1028 05:17:05.485520    9894 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2
	I1028 05:17:05.485527    9894 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:17:05.485546    9894 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:17:05.485579    9894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:65:55:b3:4d:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2
	I1028 05:17:05.487367    9894 main.go:141] libmachine: STDOUT: 
	I1028 05:17:05.487381    9894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:17:05.487400    9894 client.go:171] duration metric: took 230.099334ms to LocalClient.Create
	I1028 05:17:07.489434    9894 start.go:128] duration metric: took 2.252257583s to createHost
	I1028 05:17:07.489469    9894 start.go:83] releasing machines lock for "calico-181000", held for 2.252340833s
	W1028 05:17:07.489486    9894 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:07.505373    9894 out.go:177] * Deleting "calico-181000" in qemu2 ...
	W1028 05:17:07.517428    9894 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:07.517438    9894 start.go:729] Will try again in 5 seconds ...
	I1028 05:17:12.519629    9894 start.go:360] acquireMachinesLock for calico-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:17:12.520354    9894 start.go:364] duration metric: took 554.958µs to acquireMachinesLock for "calico-181000"
	I1028 05:17:12.520433    9894 start.go:93] Provisioning new machine with config: &{Name:calico-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:calico-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:17:12.520649    9894 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:17:12.531320    9894 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:17:12.576541    9894 start.go:159] libmachine.API.Create for "calico-181000" (driver="qemu2")
	I1028 05:17:12.576607    9894 client.go:168] LocalClient.Create starting
	I1028 05:17:12.576756    9894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:17:12.576855    9894 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:12.576874    9894 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:12.576947    9894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:17:12.577005    9894 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:12.577022    9894 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:12.577838    9894 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:17:12.743187    9894 main.go:141] libmachine: Creating SSH key...
	I1028 05:17:12.820087    9894 main.go:141] libmachine: Creating Disk image...
	I1028 05:17:12.820097    9894 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:17:12.820305    9894 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2
	I1028 05:17:12.830731    9894 main.go:141] libmachine: STDOUT: 
	I1028 05:17:12.830755    9894 main.go:141] libmachine: STDERR: 
	I1028 05:17:12.830824    9894 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2 +20000M
	I1028 05:17:12.839460    9894 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:17:12.839484    9894 main.go:141] libmachine: STDERR: 
	I1028 05:17:12.839497    9894 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2
	I1028 05:17:12.839502    9894 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:17:12.839510    9894 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:17:12.839544    9894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:6a:5d:f9:07:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/calico-181000/disk.qcow2
	I1028 05:17:12.841430    9894 main.go:141] libmachine: STDOUT: 
	I1028 05:17:12.841444    9894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:17:12.841456    9894 client.go:171] duration metric: took 264.849084ms to LocalClient.Create
	I1028 05:17:14.843661    9894 start.go:128] duration metric: took 2.323031833s to createHost
	I1028 05:17:14.843723    9894 start.go:83] releasing machines lock for "calico-181000", held for 2.32339325s
	W1028 05:17:14.844033    9894 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:14.853572    9894 out.go:201] 
	W1028 05:17:14.858595    9894 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:17:14.858638    9894 out.go:270] * 
	* 
	W1028 05:17:14.860006    9894 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:17:14.870550    9894 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.77s)
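The recovery path is identical in each failed start above: the first createHost attempt fails, minikube deletes the half-created profile, waits five seconds, retries once, and when the retry hits the same refused connection it exits with status 80 (GUEST_PROVISION). The sketch below captures that control flow under the assumption of a single fixed-delay retry; the function names and wiring are illustrative, not minikube's actual source:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "time"
    )

    // createHostWithRetry mirrors the behavior visible in the log: one
    // fixed five-second backoff between attempts, then the error is
    // surfaced as a GUEST_PROVISION provisioning failure.
    func createHostWithRetry(create func() error, deleteHost func()) error {
        err := create()
        if err == nil {
            return nil
        }
        log.Printf("! StartHost failed, but will try again: %v", err)
        deleteHost() // clean up the half-created machine before retrying
        time.Sleep(5 * time.Second)
        if err := create(); err != nil {
            return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
        }
        return nil
    }

    func main() {
        failing := func() error {
            return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        }
        if err := createHostWithRetry(failing, func() {}); err != nil {
            log.Fatal(err) // exits non-zero, as the test run does with status 80
        }
    }

Because the daemon is down for the whole run, the retry can never succeed, which is why each test in this group fails in a near-constant ~10 seconds.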

TestNetworkPlugins/group/custom-flannel/Start (9.93s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.933122667s)

-- stdout --
	* [custom-flannel-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-181000" primary control-plane node in "custom-flannel-181000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-181000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:17:17.458891   10012 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:17:17.459050   10012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:17:17.459054   10012 out.go:358] Setting ErrFile to fd 2...
	I1028 05:17:17.459056   10012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:17:17.459183   10012 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:17:17.460376   10012 out.go:352] Setting JSON to false
	I1028 05:17:17.478522   10012 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6408,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:17:17.478604   10012 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:17:17.484696   10012 out.go:177] * [custom-flannel-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:17:17.492547   10012 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:17:17.492608   10012 notify.go:220] Checking for updates...
	I1028 05:17:17.498634   10012 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:17:17.500094   10012 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:17:17.502675   10012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:17:17.505703   10012 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:17:17.508680   10012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:17:17.512072   10012 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:17:17.512148   10012 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:17:17.512190   10012 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:17:17.516708   10012 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:17:17.523680   10012 start.go:297] selected driver: qemu2
	I1028 05:17:17.523687   10012 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:17:17.523694   10012 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:17:17.526092   10012 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:17:17.529679   10012 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:17:17.532764   10012 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:17:17.532782   10012 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1028 05:17:17.532789   10012 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1028 05:17:17.532823   10012 start.go:340] cluster config:
	{Name:custom-flannel-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:17:17.537143   10012 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:17:17.545715   10012 out.go:177] * Starting "custom-flannel-181000" primary control-plane node in "custom-flannel-181000" cluster
	I1028 05:17:17.549609   10012 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:17:17.549626   10012 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:17:17.549638   10012 cache.go:56] Caching tarball of preloaded images
	I1028 05:17:17.549708   10012 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:17:17.549713   10012 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:17:17.549770   10012 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/custom-flannel-181000/config.json ...
	I1028 05:17:17.549781   10012 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/custom-flannel-181000/config.json: {Name:mk42e0b658ccbce1f3ee5ca553bef4b5df30c93e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:17:17.550009   10012 start.go:360] acquireMachinesLock for custom-flannel-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:17:17.550051   10012 start.go:364] duration metric: took 36.625µs to acquireMachinesLock for "custom-flannel-181000"
	I1028 05:17:17.550063   10012 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:17:17.550096   10012 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:17:17.558663   10012 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:17:17.573409   10012 start.go:159] libmachine.API.Create for "custom-flannel-181000" (driver="qemu2")
	I1028 05:17:17.573434   10012 client.go:168] LocalClient.Create starting
	I1028 05:17:17.573503   10012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:17:17.573541   10012 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:17.573553   10012 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:17.573590   10012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:17:17.573621   10012 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:17.573633   10012 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:17.573997   10012 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:17:17.732594   10012 main.go:141] libmachine: Creating SSH key...
	I1028 05:17:17.863671   10012 main.go:141] libmachine: Creating Disk image...
	I1028 05:17:17.863678   10012 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:17:17.863870   10012 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2
	I1028 05:17:17.874217   10012 main.go:141] libmachine: STDOUT: 
	I1028 05:17:17.874247   10012 main.go:141] libmachine: STDERR: 
	I1028 05:17:17.874304   10012 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2 +20000M
	I1028 05:17:17.882939   10012 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:17:17.882954   10012 main.go:141] libmachine: STDERR: 
	I1028 05:17:17.882970   10012 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2
	I1028 05:17:17.882976   10012 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:17:17.882988   10012 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:17:17.883023   10012 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:55:33:9e:15:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2
	I1028 05:17:17.884834   10012 main.go:141] libmachine: STDOUT: 
	I1028 05:17:17.884861   10012 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:17:17.884883   10012 client.go:171] duration metric: took 311.450334ms to LocalClient.Create
	I1028 05:17:19.887062   10012 start.go:128] duration metric: took 2.336986875s to createHost
	I1028 05:17:19.887155   10012 start.go:83] releasing machines lock for "custom-flannel-181000", held for 2.337144833s
	W1028 05:17:19.887243   10012 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:19.897510   10012 out.go:177] * Deleting "custom-flannel-181000" in qemu2 ...
	W1028 05:17:19.924969   10012 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:19.924992   10012 start.go:729] Will try again in 5 seconds ...
	I1028 05:17:24.927202   10012 start.go:360] acquireMachinesLock for custom-flannel-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:17:24.927861   10012 start.go:364] duration metric: took 545.625µs to acquireMachinesLock for "custom-flannel-181000"
	I1028 05:17:24.928004   10012 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:17:24.928303   10012 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:17:24.938966   10012 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:17:24.988083   10012 start.go:159] libmachine.API.Create for "custom-flannel-181000" (driver="qemu2")
	I1028 05:17:24.988151   10012 client.go:168] LocalClient.Create starting
	I1028 05:17:24.988289   10012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:17:24.988382   10012 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:24.988399   10012 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:24.988463   10012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:17:24.988524   10012 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:24.988538   10012 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:24.989292   10012 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:17:25.156516   10012 main.go:141] libmachine: Creating SSH key...
	I1028 05:17:25.293228   10012 main.go:141] libmachine: Creating Disk image...
	I1028 05:17:25.293239   10012 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:17:25.293441   10012 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2
	I1028 05:17:25.303718   10012 main.go:141] libmachine: STDOUT: 
	I1028 05:17:25.303739   10012 main.go:141] libmachine: STDERR: 
	I1028 05:17:25.303799   10012 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2 +20000M
	I1028 05:17:25.312506   10012 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:17:25.312523   10012 main.go:141] libmachine: STDERR: 
	I1028 05:17:25.312536   10012 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2
	I1028 05:17:25.312542   10012 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:17:25.312554   10012 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:17:25.312592   10012 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:0b:30:34:cc:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/custom-flannel-181000/disk.qcow2
	I1028 05:17:25.314494   10012 main.go:141] libmachine: STDOUT: 
	I1028 05:17:25.314516   10012 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:17:25.314530   10012 client.go:171] duration metric: took 326.37925ms to LocalClient.Create
	I1028 05:17:27.316699   10012 start.go:128] duration metric: took 2.388406792s to createHost
	I1028 05:17:27.316781   10012 start.go:83] releasing machines lock for "custom-flannel-181000", held for 2.38894775s
	W1028 05:17:27.317147   10012 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:27.333891   10012 out.go:201] 
	W1028 05:17:27.337995   10012 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:17:27.338059   10012 out.go:270] * 
	* 
	W1028 05:17:27.340102   10012 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:17:27.351845   10012 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.93s)
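
All four attempts in this group die the same way: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet before QEMU ever boots, minikube retries once after 5 seconds, the retry hits the same refusal, and the run exits with status 80 after roughly 10 seconds. That points at the socket_vmnet daemon on the build agent rather than at the CNI under test. A minimal triage sketch, assuming the paths shown in the log above (these commands are illustrative and were not part of the test run):

	# Is the socket file present, and is any daemon holding it open?
	ls -l /var/run/socket_vmnet
	sudo lsof /var/run/socket_vmnet
	# Is a socket_vmnet process running at all?
	pgrep -fl socket_vmnet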

TestNetworkPlugins/group/false/Start (9.72s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.717012125s)

-- stdout --
	* [false-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-181000" primary control-plane node in "false-181000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-181000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:17:29.891097   10129 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:17:29.891250   10129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:17:29.891253   10129 out.go:358] Setting ErrFile to fd 2...
	I1028 05:17:29.891255   10129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:17:29.891438   10129 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:17:29.892643   10129 out.go:352] Setting JSON to false
	I1028 05:17:29.911343   10129 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6420,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:17:29.911432   10129 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:17:29.917116   10129 out.go:177] * [false-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:17:29.923970   10129 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:17:29.924030   10129 notify.go:220] Checking for updates...
	I1028 05:17:29.930023   10129 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:17:29.932990   10129 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:17:29.936035   10129 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:17:29.938963   10129 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:17:29.940069   10129 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:17:29.943333   10129 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:17:29.943403   10129 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:17:29.943442   10129 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:17:29.947975   10129 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:17:29.953009   10129 start.go:297] selected driver: qemu2
	I1028 05:17:29.953016   10129 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:17:29.953022   10129 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:17:29.955390   10129 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:17:29.959917   10129 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:17:29.961311   10129 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:17:29.961330   10129 cni.go:84] Creating CNI manager for "false"
	I1028 05:17:29.961355   10129 start.go:340] cluster config:
	{Name:false-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:17:29.965601   10129 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:17:29.974036   10129 out.go:177] * Starting "false-181000" primary control-plane node in "false-181000" cluster
	I1028 05:17:29.977951   10129 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:17:29.977967   10129 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:17:29.977975   10129 cache.go:56] Caching tarball of preloaded images
	I1028 05:17:29.978045   10129 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:17:29.978052   10129 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:17:29.978120   10129 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/false-181000/config.json ...
	I1028 05:17:29.978133   10129 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/false-181000/config.json: {Name:mk58be909525ef8cb1c7a1e54491d49e2e21e8f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:17:29.978380   10129 start.go:360] acquireMachinesLock for false-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:17:29.978422   10129 start.go:364] duration metric: took 37.167µs to acquireMachinesLock for "false-181000"
	I1028 05:17:29.978433   10129 start.go:93] Provisioning new machine with config: &{Name:false-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:false-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:17:29.978464   10129 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:17:29.982804   10129 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:17:29.998252   10129 start.go:159] libmachine.API.Create for "false-181000" (driver="qemu2")
	I1028 05:17:29.998274   10129 client.go:168] LocalClient.Create starting
	I1028 05:17:29.998342   10129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:17:29.998379   10129 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:29.998393   10129 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:29.998432   10129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:17:29.998461   10129 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:29.998469   10129 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:29.998846   10129 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:17:30.158913   10129 main.go:141] libmachine: Creating SSH key...
	I1028 05:17:30.209431   10129 main.go:141] libmachine: Creating Disk image...
	I1028 05:17:30.209437   10129 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:17:30.209618   10129 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2
	I1028 05:17:30.219401   10129 main.go:141] libmachine: STDOUT: 
	I1028 05:17:30.219429   10129 main.go:141] libmachine: STDERR: 
	I1028 05:17:30.219486   10129 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2 +20000M
	I1028 05:17:30.228477   10129 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:17:30.228507   10129 main.go:141] libmachine: STDERR: 
	I1028 05:17:30.228523   10129 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2
	I1028 05:17:30.228529   10129 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:17:30.228542   10129 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:17:30.228569   10129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:ee:fc:36:94:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2
	I1028 05:17:30.230516   10129 main.go:141] libmachine: STDOUT: 
	I1028 05:17:30.230530   10129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:17:30.230549   10129 client.go:171] duration metric: took 232.274292ms to LocalClient.Create
	I1028 05:17:32.232722   10129 start.go:128] duration metric: took 2.2542775s to createHost
	I1028 05:17:32.232822   10129 start.go:83] releasing machines lock for "false-181000", held for 2.254438625s
	W1028 05:17:32.232899   10129 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:32.244164   10129 out.go:177] * Deleting "false-181000" in qemu2 ...
	W1028 05:17:32.267292   10129 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:32.267319   10129 start.go:729] Will try again in 5 seconds ...
	I1028 05:17:37.269447   10129 start.go:360] acquireMachinesLock for false-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:17:37.270023   10129 start.go:364] duration metric: took 426.625µs to acquireMachinesLock for "false-181000"
	I1028 05:17:37.270181   10129 start.go:93] Provisioning new machine with config: &{Name:false-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:false-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:17:37.270452   10129 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:17:37.277050   10129 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:17:37.317276   10129 start.go:159] libmachine.API.Create for "false-181000" (driver="qemu2")
	I1028 05:17:37.317320   10129 client.go:168] LocalClient.Create starting
	I1028 05:17:37.317457   10129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:17:37.317532   10129 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:37.317552   10129 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:37.317613   10129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:17:37.317661   10129 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:37.317672   10129 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:37.318204   10129 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:17:37.483480   10129 main.go:141] libmachine: Creating SSH key...
	I1028 05:17:37.507511   10129 main.go:141] libmachine: Creating Disk image...
	I1028 05:17:37.507517   10129 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:17:37.507693   10129 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2
	I1028 05:17:37.517644   10129 main.go:141] libmachine: STDOUT: 
	I1028 05:17:37.517667   10129 main.go:141] libmachine: STDERR: 
	I1028 05:17:37.517720   10129 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2 +20000M
	I1028 05:17:37.526420   10129 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:17:37.526442   10129 main.go:141] libmachine: STDERR: 
	I1028 05:17:37.526453   10129 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2
	I1028 05:17:37.526457   10129 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:17:37.526465   10129 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:17:37.526487   10129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:44:24:c4:8b:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/false-181000/disk.qcow2
	I1028 05:17:37.528485   10129 main.go:141] libmachine: STDOUT: 
	I1028 05:17:37.528500   10129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:17:37.528511   10129 client.go:171] duration metric: took 211.189625ms to LocalClient.Create
	I1028 05:17:39.530685   10129 start.go:128] duration metric: took 2.260246459s to createHost
	I1028 05:17:39.530802   10129 start.go:83] releasing machines lock for "false-181000", held for 2.260777167s
	W1028 05:17:39.531268   10129 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:39.541090   10129 out.go:201] 
	W1028 05:17:39.550151   10129 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:17:39.550185   10129 out.go:270] * 
	* 
	W1028 05:17:39.552846   10129 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:17:39.565991   10129 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.72s)
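
If the triage above shows no listener on /var/run/socket_vmnet, restarting the daemon is the usual fix. A sketch, hedged on how socket_vmnet was installed: the /opt/socket_vmnet prefix in the log suggests the standalone launchd install, and the service label below is an assumption taken from the socket_vmnet project's default plist name; a Homebrew-managed install would use the second form instead:

	# Standalone launchd install (service label assumed):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# Homebrew-managed install:
	sudo brew services restart socket_vmnet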

TestNetworkPlugins/group/enable-default-cni/Start (9.92s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.9195655s)

-- stdout --
	* [enable-default-cni-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-181000" primary control-plane node in "enable-default-cni-181000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-181000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:17:41.883217   10238 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:17:41.883373   10238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:17:41.883379   10238 out.go:358] Setting ErrFile to fd 2...
	I1028 05:17:41.883382   10238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:17:41.883513   10238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:17:41.884665   10238 out.go:352] Setting JSON to false
	I1028 05:17:41.903950   10238 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6432,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:17:41.904020   10238 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:17:41.909114   10238 out.go:177] * [enable-default-cni-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:17:41.919111   10238 notify.go:220] Checking for updates...
	I1028 05:17:41.925119   10238 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:17:41.933022   10238 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:17:41.936069   10238 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:17:41.940073   10238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:17:41.943099   10238 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:17:41.946004   10238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:17:41.949457   10238 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:17:41.949527   10238 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:17:41.949575   10238 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:17:41.954080   10238 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:17:41.961020   10238 start.go:297] selected driver: qemu2
	I1028 05:17:41.961025   10238 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:17:41.961032   10238 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:17:41.963469   10238 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:17:41.965986   10238 out.go:177] * Automatically selected the socket_vmnet network
	E1028 05:17:41.968971   10238 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1028 05:17:41.968985   10238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:17:41.969004   10238 cni.go:84] Creating CNI manager for "bridge"
	I1028 05:17:41.969011   10238 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:17:41.969047   10238 start.go:340] cluster config:
	{Name:enable-default-cni-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:17:41.973552   10238 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:17:41.982028   10238 out.go:177] * Starting "enable-default-cni-181000" primary control-plane node in "enable-default-cni-181000" cluster
	I1028 05:17:41.985982   10238 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:17:41.985995   10238 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:17:41.986003   10238 cache.go:56] Caching tarball of preloaded images
	I1028 05:17:41.986069   10238 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:17:41.986075   10238 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:17:41.986144   10238 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/enable-default-cni-181000/config.json ...
	I1028 05:17:41.986158   10238 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/enable-default-cni-181000/config.json: {Name:mk36d235f08ec0d2bd3365b47a8e48e442c954cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:17:41.986402   10238 start.go:360] acquireMachinesLock for enable-default-cni-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:17:41.986450   10238 start.go:364] duration metric: took 41.917µs to acquireMachinesLock for "enable-default-cni-181000"
	I1028 05:17:41.986462   10238 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:17:41.986490   10238 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:17:41.989926   10238 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:17:42.004204   10238 start.go:159] libmachine.API.Create for "enable-default-cni-181000" (driver="qemu2")
	I1028 05:17:42.004230   10238 client.go:168] LocalClient.Create starting
	I1028 05:17:42.004302   10238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:17:42.004346   10238 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:42.004357   10238 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:42.004401   10238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:17:42.004429   10238 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:42.004438   10238 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:42.004799   10238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:17:42.161715   10238 main.go:141] libmachine: Creating SSH key...
	I1028 05:17:42.316054   10238 main.go:141] libmachine: Creating Disk image...
	I1028 05:17:42.316065   10238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:17:42.316291   10238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2
	I1028 05:17:42.326722   10238 main.go:141] libmachine: STDOUT: 
	I1028 05:17:42.326739   10238 main.go:141] libmachine: STDERR: 
	I1028 05:17:42.326807   10238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2 +20000M
	I1028 05:17:42.335702   10238 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:17:42.335717   10238 main.go:141] libmachine: STDERR: 
	I1028 05:17:42.335742   10238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2
	I1028 05:17:42.335747   10238 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:17:42.335759   10238 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:17:42.335807   10238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9a:28:92:f1:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2
	I1028 05:17:42.337710   10238 main.go:141] libmachine: STDOUT: 
	I1028 05:17:42.337728   10238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:17:42.337746   10238 client.go:171] duration metric: took 333.518166ms to LocalClient.Create
	I1028 05:17:44.339807   10238 start.go:128] duration metric: took 2.353355792s to createHost
	I1028 05:17:44.339840   10238 start.go:83] releasing machines lock for "enable-default-cni-181000", held for 2.353434167s
	W1028 05:17:44.339868   10238 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:44.349977   10238 out.go:177] * Deleting "enable-default-cni-181000" in qemu2 ...
	W1028 05:17:44.371200   10238 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:44.371215   10238 start.go:729] Will try again in 5 seconds ...
	I1028 05:17:49.373359   10238 start.go:360] acquireMachinesLock for enable-default-cni-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:17:49.373985   10238 start.go:364] duration metric: took 520.75µs to acquireMachinesLock for "enable-default-cni-181000"
	I1028 05:17:49.374053   10238 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:17:49.374427   10238 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:17:49.387940   10238 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:17:49.436839   10238 start.go:159] libmachine.API.Create for "enable-default-cni-181000" (driver="qemu2")
	I1028 05:17:49.436888   10238 client.go:168] LocalClient.Create starting
	I1028 05:17:49.437036   10238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:17:49.437133   10238 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:49.437155   10238 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:49.437215   10238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:17:49.437273   10238 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:49.437285   10238 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:49.437891   10238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:17:49.607837   10238 main.go:141] libmachine: Creating SSH key...
	I1028 05:17:49.707918   10238 main.go:141] libmachine: Creating Disk image...
	I1028 05:17:49.707924   10238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:17:49.708109   10238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2
	I1028 05:17:49.718260   10238 main.go:141] libmachine: STDOUT: 
	I1028 05:17:49.718274   10238 main.go:141] libmachine: STDERR: 
	I1028 05:17:49.718344   10238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2 +20000M
	I1028 05:17:49.726790   10238 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:17:49.726805   10238 main.go:141] libmachine: STDERR: 
	I1028 05:17:49.726822   10238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2
	I1028 05:17:49.726829   10238 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:17:49.726838   10238 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:17:49.726880   10238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:fe:33:16:77:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/enable-default-cni-181000/disk.qcow2
	I1028 05:17:49.728743   10238 main.go:141] libmachine: STDOUT: 
	I1028 05:17:49.728757   10238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:17:49.728769   10238 client.go:171] duration metric: took 291.88075ms to LocalClient.Create
	I1028 05:17:51.730912   10238 start.go:128] duration metric: took 2.356500792s to createHost
	I1028 05:17:51.730973   10238 start.go:83] releasing machines lock for "enable-default-cni-181000", held for 2.357015708s
	W1028 05:17:51.731359   10238 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:51.739090   10238 out.go:201] 
	W1028 05:17:51.744059   10238 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:17:51.744081   10238 out.go:270] * 
	* 
	W1028 05:17:51.746103   10238 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:17:51.757958   10238 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.92s)
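
Note: every Start failure in this group exits with status 80 for the same underlying reason: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the QEMU VM is never launched. A minimal diagnostic sketch for the CI host follows; the paths are taken from the log above, while the availability of BSD netcat (nc with -U) on the agent is an assumption:

	# Is anything actually serving the unix socket that the client keeps failing to reach?
	ls -l /var/run/socket_vmnet      # does the socket file exist at the logged path?
	pgrep -fl socket_vmnet           # is a socket_vmnet daemon process running?
	nc -z -U /var/run/socket_vmnet && echo "listening" || echo "refused, matching the failures above"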

TestNetworkPlugins/group/flannel/Start (9.84s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.834577291s)

-- stdout --
	* [flannel-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-181000" primary control-plane node in "flannel-181000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-181000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:17:54.066453   10351 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:17:54.066614   10351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:17:54.066617   10351 out.go:358] Setting ErrFile to fd 2...
	I1028 05:17:54.066624   10351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:17:54.066751   10351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:17:54.067930   10351 out.go:352] Setting JSON to false
	I1028 05:17:54.085858   10351 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6445,"bootTime":1730111429,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:17:54.085934   10351 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:17:54.093052   10351 out.go:177] * [flannel-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:17:54.100979   10351 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:17:54.101039   10351 notify.go:220] Checking for updates...
	I1028 05:17:54.107967   10351 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:17:54.111949   10351 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:17:54.115955   10351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:17:54.118903   10351 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:17:54.121948   10351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:17:54.125316   10351 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:17:54.125389   10351 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:17:54.125437   10351 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:17:54.129927   10351 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:17:54.136925   10351 start.go:297] selected driver: qemu2
	I1028 05:17:54.136931   10351 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:17:54.136936   10351 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:17:54.139523   10351 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:17:54.142937   10351 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:17:54.147008   10351 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:17:54.147030   10351 cni.go:84] Creating CNI manager for "flannel"
	I1028 05:17:54.147037   10351 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1028 05:17:54.147076   10351 start.go:340] cluster config:
	{Name:flannel-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:17:54.151857   10351 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:17:54.159909   10351 out.go:177] * Starting "flannel-181000" primary control-plane node in "flannel-181000" cluster
	I1028 05:17:54.163937   10351 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:17:54.163951   10351 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:17:54.163960   10351 cache.go:56] Caching tarball of preloaded images
	I1028 05:17:54.164036   10351 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:17:54.164042   10351 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:17:54.164104   10351 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/flannel-181000/config.json ...
	I1028 05:17:54.164118   10351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/flannel-181000/config.json: {Name:mk6fb5c9dc417554d3f2c90c9a2c1bfcaed2e3e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:17:54.164455   10351 start.go:360] acquireMachinesLock for flannel-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:17:54.164501   10351 start.go:364] duration metric: took 40.583µs to acquireMachinesLock for "flannel-181000"
	I1028 05:17:54.164512   10351 start.go:93] Provisioning new machine with config: &{Name:flannel-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:17:54.164539   10351 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:17:54.168969   10351 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:17:54.184755   10351 start.go:159] libmachine.API.Create for "flannel-181000" (driver="qemu2")
	I1028 05:17:54.184782   10351 client.go:168] LocalClient.Create starting
	I1028 05:17:54.184850   10351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:17:54.184886   10351 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:54.184901   10351 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:54.184941   10351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:17:54.184970   10351 main.go:141] libmachine: Decoding PEM data...
	I1028 05:17:54.184977   10351 main.go:141] libmachine: Parsing certificate...
	I1028 05:17:54.185320   10351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:17:54.344875   10351 main.go:141] libmachine: Creating SSH key...
	I1028 05:17:54.430728   10351 main.go:141] libmachine: Creating Disk image...
	I1028 05:17:54.430738   10351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:17:54.430974   10351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2
	I1028 05:17:54.441889   10351 main.go:141] libmachine: STDOUT: 
	I1028 05:17:54.441909   10351 main.go:141] libmachine: STDERR: 
	I1028 05:17:54.441986   10351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2 +20000M
	I1028 05:17:54.451559   10351 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:17:54.451593   10351 main.go:141] libmachine: STDERR: 
	I1028 05:17:54.451613   10351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2
	I1028 05:17:54.451618   10351 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:17:54.451632   10351 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:17:54.451662   10351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:8c:28:e5:06:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2
	I1028 05:17:54.453700   10351 main.go:141] libmachine: STDOUT: 
	I1028 05:17:54.453720   10351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:17:54.453738   10351 client.go:171] duration metric: took 268.955292ms to LocalClient.Create
	I1028 05:17:56.455880   10351 start.go:128] duration metric: took 2.291353334s to createHost
	I1028 05:17:56.455925   10351 start.go:83] releasing machines lock for "flannel-181000", held for 2.291468417s
	W1028 05:17:56.455953   10351 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:56.467016   10351 out.go:177] * Deleting "flannel-181000" in qemu2 ...
	W1028 05:17:56.488378   10351 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:17:56.488394   10351 start.go:729] Will try again in 5 seconds ...
	I1028 05:18:01.490524   10351 start.go:360] acquireMachinesLock for flannel-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:18:01.491007   10351 start.go:364] duration metric: took 406.334µs to acquireMachinesLock for "flannel-181000"
	I1028 05:18:01.491147   10351 start.go:93] Provisioning new machine with config: &{Name:flannel-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:18:01.491412   10351 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:18:01.500918   10351 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:18:01.547791   10351 start.go:159] libmachine.API.Create for "flannel-181000" (driver="qemu2")
	I1028 05:18:01.547849   10351 client.go:168] LocalClient.Create starting
	I1028 05:18:01.548012   10351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:18:01.548100   10351 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:01.548121   10351 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:01.548186   10351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:18:01.548248   10351 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:01.548260   10351 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:01.548995   10351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:18:01.713280   10351 main.go:141] libmachine: Creating SSH key...
	I1028 05:18:01.801408   10351 main.go:141] libmachine: Creating Disk image...
	I1028 05:18:01.801416   10351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:18:01.801613   10351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2
	I1028 05:18:01.811496   10351 main.go:141] libmachine: STDOUT: 
	I1028 05:18:01.811518   10351 main.go:141] libmachine: STDERR: 
	I1028 05:18:01.811588   10351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2 +20000M
	I1028 05:18:01.820208   10351 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:18:01.820224   10351 main.go:141] libmachine: STDERR: 
	I1028 05:18:01.820239   10351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2
	I1028 05:18:01.820245   10351 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:18:01.820254   10351 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:18:01.820278   10351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:9a:0f:d6:c3:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/flannel-181000/disk.qcow2
	I1028 05:18:01.822168   10351 main.go:141] libmachine: STDOUT: 
	I1028 05:18:01.822186   10351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:18:01.822199   10351 client.go:171] duration metric: took 274.348875ms to LocalClient.Create
	I1028 05:18:03.824360   10351 start.go:128] duration metric: took 2.332955167s to createHost
	I1028 05:18:03.824465   10351 start.go:83] releasing machines lock for "flannel-181000", held for 2.333487541s
	W1028 05:18:03.824915   10351 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:03.838286   10351 out.go:201] 
	W1028 05:18:03.842599   10351 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:18:03.842643   10351 out.go:270] * 
	* 
	W1028 05:18:03.845580   10351 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:18:03.854449   10351 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.84s)
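
If the daemon is simply not running, restarting it is the usual fix. A hedged recovery sketch, assuming socket_vmnet was installed per its upstream README: the launchd label io.github.lima-vm.socket_vmnet and the 192.168.105.1 gateway are that README's defaults, not values confirmed anywhere in this log:

	# Restart the daemon if it is managed by launchd (label is an assumption, see above).
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# Or run it in the foreground for a one-off check (gateway address is the README default):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet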

TestNetworkPlugins/group/bridge/Start (9.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.878553041s)

-- stdout --
	* [bridge-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-181000" primary control-plane node in "bridge-181000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-181000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:18:06.379467   10470 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:18:06.379620   10470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:06.379623   10470 out.go:358] Setting ErrFile to fd 2...
	I1028 05:18:06.379625   10470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:06.379752   10470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:18:06.380967   10470 out.go:352] Setting JSON to false
	I1028 05:18:06.398857   10470 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6457,"bootTime":1730111429,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:18:06.398939   10470 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:18:06.404899   10470 out.go:177] * [bridge-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:18:06.411842   10470 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:18:06.411891   10470 notify.go:220] Checking for updates...
	I1028 05:18:06.418905   10470 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:18:06.421816   10470 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:18:06.424846   10470 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:18:06.427906   10470 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:18:06.429247   10470 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:18:06.432209   10470 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:18:06.432280   10470 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:18:06.432328   10470 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:18:06.436879   10470 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:18:06.441857   10470 start.go:297] selected driver: qemu2
	I1028 05:18:06.441864   10470 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:18:06.441871   10470 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:18:06.444250   10470 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:18:06.446845   10470 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:18:06.450019   10470 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:18:06.450036   10470 cni.go:84] Creating CNI manager for "bridge"
	I1028 05:18:06.450039   10470 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:18:06.450070   10470 start.go:340] cluster config:
	{Name:bridge-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:18:06.454431   10470 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:06.462895   10470 out.go:177] * Starting "bridge-181000" primary control-plane node in "bridge-181000" cluster
	I1028 05:18:06.466853   10470 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:18:06.466869   10470 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:18:06.466880   10470 cache.go:56] Caching tarball of preloaded images
	I1028 05:18:06.466949   10470 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:18:06.466954   10470 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:18:06.467016   10470 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/bridge-181000/config.json ...
	I1028 05:18:06.467026   10470 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/bridge-181000/config.json: {Name:mk1578c7f0fabfe89850c971130e3a1db5b2bfa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:18:06.467256   10470 start.go:360] acquireMachinesLock for bridge-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:18:06.467298   10470 start.go:364] duration metric: took 37.042µs to acquireMachinesLock for "bridge-181000"
	I1028 05:18:06.467308   10470 start.go:93] Provisioning new machine with config: &{Name:bridge-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:18:06.467335   10470 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:18:06.475860   10470 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:18:06.491246   10470 start.go:159] libmachine.API.Create for "bridge-181000" (driver="qemu2")
	I1028 05:18:06.491271   10470 client.go:168] LocalClient.Create starting
	I1028 05:18:06.491357   10470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:18:06.491392   10470 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:06.491402   10470 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:06.491436   10470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:18:06.491464   10470 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:06.491472   10470 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:06.491813   10470 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:18:06.649655   10470 main.go:141] libmachine: Creating SSH key...
	I1028 05:18:06.746857   10470 main.go:141] libmachine: Creating Disk image...
	I1028 05:18:06.746864   10470 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:18:06.747060   10470 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2
	I1028 05:18:06.757380   10470 main.go:141] libmachine: STDOUT: 
	I1028 05:18:06.757400   10470 main.go:141] libmachine: STDERR: 
	I1028 05:18:06.757452   10470 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2 +20000M
	I1028 05:18:06.766257   10470 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:18:06.766272   10470 main.go:141] libmachine: STDERR: 
	I1028 05:18:06.766290   10470 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2
	I1028 05:18:06.766296   10470 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:18:06.766309   10470 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:18:06.766344   10470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:2f:11:e5:5b:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2
	I1028 05:18:06.768240   10470 main.go:141] libmachine: STDOUT: 
	I1028 05:18:06.768253   10470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:18:06.768270   10470 client.go:171] duration metric: took 277.000125ms to LocalClient.Create
	I1028 05:18:08.770412   10470 start.go:128] duration metric: took 2.303098292s to createHost
	I1028 05:18:08.770504   10470 start.go:83] releasing machines lock for "bridge-181000", held for 2.303248375s
	W1028 05:18:08.770586   10470 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:08.781454   10470 out.go:177] * Deleting "bridge-181000" in qemu2 ...
	W1028 05:18:08.812769   10470 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:08.812799   10470 start.go:729] Will try again in 5 seconds ...
	I1028 05:18:13.814998   10470 start.go:360] acquireMachinesLock for bridge-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:18:13.815608   10470 start.go:364] duration metric: took 506.584µs to acquireMachinesLock for "bridge-181000"
	I1028 05:18:13.815686   10470 start.go:93] Provisioning new machine with config: &{Name:bridge-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:18:13.815986   10470 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:18:13.830374   10470 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:18:13.877602   10470 start.go:159] libmachine.API.Create for "bridge-181000" (driver="qemu2")
	I1028 05:18:13.877664   10470 client.go:168] LocalClient.Create starting
	I1028 05:18:13.877793   10470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:18:13.877896   10470 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:13.877914   10470 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:13.877978   10470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:18:13.878065   10470 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:13.878078   10470 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:13.878810   10470 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:18:14.043527   10470 main.go:141] libmachine: Creating SSH key...
	I1028 05:18:14.161933   10470 main.go:141] libmachine: Creating Disk image...
	I1028 05:18:14.161940   10470 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:18:14.162146   10470 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2
	I1028 05:18:14.172272   10470 main.go:141] libmachine: STDOUT: 
	I1028 05:18:14.172292   10470 main.go:141] libmachine: STDERR: 
	I1028 05:18:14.172360   10470 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2 +20000M
	I1028 05:18:14.182144   10470 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:18:14.182170   10470 main.go:141] libmachine: STDERR: 
	I1028 05:18:14.182183   10470 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2
	I1028 05:18:14.182188   10470 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:18:14.182200   10470 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:18:14.182228   10470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c6:be:75:a7:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/bridge-181000/disk.qcow2
	I1028 05:18:14.184399   10470 main.go:141] libmachine: STDOUT: 
	I1028 05:18:14.184415   10470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:18:14.184427   10470 client.go:171] duration metric: took 306.76175ms to LocalClient.Create
	I1028 05:18:16.186590   10470 start.go:128] duration metric: took 2.370615667s to createHost
	I1028 05:18:16.186699   10470 start.go:83] releasing machines lock for "bridge-181000", held for 2.37111675s
	W1028 05:18:16.187101   10470 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:16.202782   10470 out.go:201] 
	W1028 05:18:16.205980   10470 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:18:16.206042   10470 out.go:270] * 
	* 
	W1028 05:18:16.207673   10470 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:18:16.215733   10470 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.88s)
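
Since the retry five seconds later fails identically, the daemon is down for the whole run rather than flaking. The failing step can be reproduced without minikube by rerunning only the client invocation from the log, substituting a no-op for QEMU (/usr/bin/true here is an illustrative stand-in, not what the test runs):

	# socket_vmnet_client connects to the unix socket, then execs the wrapped command
	# with the vmnet socket passed as fd 3 (hence "-netdev socket,id=net0,fd=3" above).
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# If the daemon is down this prints the same "Connection refused" error and exits non-zero.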

TestNetworkPlugins/group/kubenet/Start (9.92s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-181000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.915816916s)

-- stdout --
	* [kubenet-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-181000" primary control-plane node in "kubenet-181000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-181000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:18:18.615719   10581 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:18:18.615870   10581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:18.615876   10581 out.go:358] Setting ErrFile to fd 2...
	I1028 05:18:18.615878   10581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:18.615995   10581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:18:18.617147   10581 out.go:352] Setting JSON to false
	I1028 05:18:18.635307   10581 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6469,"bootTime":1730111429,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:18:18.635383   10581 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:18:18.641929   10581 out.go:177] * [kubenet-181000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:18:18.650958   10581 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:18:18.651008   10581 notify.go:220] Checking for updates...
	I1028 05:18:18.657743   10581 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:18:18.661916   10581 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:18:18.665906   10581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:18:18.669876   10581 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:18:18.673951   10581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:18:18.678208   10581 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:18:18.678288   10581 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:18:18.678340   10581 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:18:18.681924   10581 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:18:18.689961   10581 start.go:297] selected driver: qemu2
	I1028 05:18:18.689969   10581 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:18:18.689977   10581 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:18:18.692592   10581 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:18:18.696848   10581 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:18:18.701019   10581 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:18:18.701051   10581 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1028 05:18:18.701088   10581 start.go:340] cluster config:
	{Name:kubenet-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:18:18.706067   10581 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:18.713907   10581 out.go:177] * Starting "kubenet-181000" primary control-plane node in "kubenet-181000" cluster
	I1028 05:18:18.717983   10581 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:18:18.718001   10581 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:18:18.718015   10581 cache.go:56] Caching tarball of preloaded images
	I1028 05:18:18.718105   10581 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:18:18.718112   10581 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:18:18.718195   10581 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/kubenet-181000/config.json ...
	I1028 05:18:18.718207   10581 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/kubenet-181000/config.json: {Name:mk0a799ad72f2c76fa399510a50ef5ce200d60f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:18:18.718596   10581 start.go:360] acquireMachinesLock for kubenet-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:18:18.718648   10581 start.go:364] duration metric: took 45.166µs to acquireMachinesLock for "kubenet-181000"
	I1028 05:18:18.718660   10581 start.go:93] Provisioning new machine with config: &{Name:kubenet-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:kubenet-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:18:18.718697   10581 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:18:18.726750   10581 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:18:18.745387   10581 start.go:159] libmachine.API.Create for "kubenet-181000" (driver="qemu2")
	I1028 05:18:18.745415   10581 client.go:168] LocalClient.Create starting
	I1028 05:18:18.745501   10581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:18:18.745542   10581 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:18.745554   10581 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:18.745601   10581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:18:18.745632   10581 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:18.745644   10581 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:18.746075   10581 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:18:18.903177   10581 main.go:141] libmachine: Creating SSH key...
	I1028 05:18:19.045910   10581 main.go:141] libmachine: Creating Disk image...
	I1028 05:18:19.045922   10581 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:18:19.046144   10581 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2
	I1028 05:18:19.056360   10581 main.go:141] libmachine: STDOUT: 
	I1028 05:18:19.056383   10581 main.go:141] libmachine: STDERR: 
	I1028 05:18:19.056442   10581 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2 +20000M
	I1028 05:18:19.065137   10581 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:18:19.065153   10581 main.go:141] libmachine: STDERR: 
	I1028 05:18:19.065175   10581 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2
	I1028 05:18:19.065180   10581 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:18:19.065191   10581 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:18:19.065224   10581 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:ac:8b:e1:84:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2
	I1028 05:18:19.067141   10581 main.go:141] libmachine: STDOUT: 
	I1028 05:18:19.067154   10581 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:18:19.067175   10581 client.go:171] duration metric: took 321.759917ms to LocalClient.Create
	I1028 05:18:21.069264   10581 start.go:128] duration metric: took 2.350601625s to createHost
	I1028 05:18:21.069303   10581 start.go:83] releasing machines lock for "kubenet-181000", held for 2.350699333s
	W1028 05:18:21.069342   10581 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:21.081047   10581 out.go:177] * Deleting "kubenet-181000" in qemu2 ...
	W1028 05:18:21.108775   10581 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:21.108799   10581 start.go:729] Will try again in 5 seconds ...
	I1028 05:18:26.110775   10581 start.go:360] acquireMachinesLock for kubenet-181000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:18:26.110902   10581 start.go:364] duration metric: took 104.542µs to acquireMachinesLock for "kubenet-181000"
	I1028 05:18:26.110926   10581 start.go:93] Provisioning new machine with config: &{Name:kubenet-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:kubenet-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:18:26.111003   10581 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:18:26.121244   10581 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 05:18:26.136345   10581 start.go:159] libmachine.API.Create for "kubenet-181000" (driver="qemu2")
	I1028 05:18:26.136375   10581 client.go:168] LocalClient.Create starting
	I1028 05:18:26.136447   10581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:18:26.136493   10581 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:26.136505   10581 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:26.136540   10581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:18:26.136569   10581 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:26.136584   10581 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:26.137031   10581 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:18:26.295844   10581 main.go:141] libmachine: Creating SSH key...
	I1028 05:18:26.434481   10581 main.go:141] libmachine: Creating Disk image...
	I1028 05:18:26.434495   10581 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:18:26.434709   10581 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2
	I1028 05:18:26.445010   10581 main.go:141] libmachine: STDOUT: 
	I1028 05:18:26.445035   10581 main.go:141] libmachine: STDERR: 
	I1028 05:18:26.445091   10581 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2 +20000M
	I1028 05:18:26.453497   10581 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:18:26.453512   10581 main.go:141] libmachine: STDERR: 
	I1028 05:18:26.453525   10581 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2
	I1028 05:18:26.453529   10581 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:18:26.453541   10581 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:18:26.453564   10581 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:8f:d0:16:fa:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/kubenet-181000/disk.qcow2
	I1028 05:18:26.455446   10581 main.go:141] libmachine: STDOUT: 
	I1028 05:18:26.455459   10581 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:18:26.455470   10581 client.go:171] duration metric: took 319.0955ms to LocalClient.Create
	I1028 05:18:28.457626   10581 start.go:128] duration metric: took 2.346646833s to createHost
	I1028 05:18:28.457695   10581 start.go:83] releasing machines lock for "kubenet-181000", held for 2.346832875s
	W1028 05:18:28.458026   10581 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:28.468656   10581 out.go:201] 
	W1028 05:18:28.471653   10581 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:18:28.471671   10581 out.go:270] * 
	* 
	W1028 05:18:28.473172   10581 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:18:28.485609   10581 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.92s)
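Both retries in this test fail at the same host-side step: socket_vmnet_client cannot open the unix socket at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and machine creation is aborted. A minimal triage sketch on the build agent follows; it assumes a stock socket_vmnet install under /opt/socket_vmnet whose launchd label is io.github.lima-vm.socket_vmnet (both are assumptions, not taken from this log; substitute whatever this agent actually uses):

	# is a socket_vmnet daemon running, and does the socket it serves exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# if not, restart the system daemon (label assumed from a default install)
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet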

TestStartStop/group/old-k8s-version/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-180000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-180000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.830035958s)

-- stdout --
	* [old-k8s-version-180000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-180000" primary control-plane node in "old-k8s-version-180000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-180000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:18:30.853218   10692 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:18:30.853370   10692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:30.853373   10692 out.go:358] Setting ErrFile to fd 2...
	I1028 05:18:30.853376   10692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:30.853522   10692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:18:30.854704   10692 out.go:352] Setting JSON to false
	I1028 05:18:30.872608   10692 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6481,"bootTime":1730111429,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:18:30.872680   10692 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:18:30.879168   10692 out.go:177] * [old-k8s-version-180000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:18:30.887070   10692 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:18:30.887131   10692 notify.go:220] Checking for updates...
	I1028 05:18:30.894075   10692 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:18:30.897022   10692 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:18:30.900145   10692 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:18:30.903155   10692 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:18:30.906140   10692 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:18:30.909463   10692 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:18:30.909544   10692 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:18:30.909579   10692 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:18:30.914201   10692 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:18:30.921076   10692 start.go:297] selected driver: qemu2
	I1028 05:18:30.921081   10692 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:18:30.921086   10692 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:18:30.923584   10692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:18:30.928120   10692 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:18:30.931142   10692 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:18:30.931159   10692 cni.go:84] Creating CNI manager for ""
	I1028 05:18:30.931180   10692 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 05:18:30.931202   10692 start.go:340] cluster config:
	{Name:old-k8s-version-180000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:18:30.936028   10692 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:30.943975   10692 out.go:177] * Starting "old-k8s-version-180000" primary control-plane node in "old-k8s-version-180000" cluster
	I1028 05:18:30.948080   10692 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 05:18:30.948096   10692 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 05:18:30.948103   10692 cache.go:56] Caching tarball of preloaded images
	I1028 05:18:30.948182   10692 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:18:30.948188   10692 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 05:18:30.948244   10692 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/old-k8s-version-180000/config.json ...
	I1028 05:18:30.948254   10692 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/old-k8s-version-180000/config.json: {Name:mk5a8a7d7fb0e53f3c35aa8bc4ed713b503bfeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:18:30.948489   10692 start.go:360] acquireMachinesLock for old-k8s-version-180000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:18:30.948534   10692 start.go:364] duration metric: took 37.208µs to acquireMachinesLock for "old-k8s-version-180000"
	I1028 05:18:30.948545   10692 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:18:30.948567   10692 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:18:30.953008   10692 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:18:30.967781   10692 start.go:159] libmachine.API.Create for "old-k8s-version-180000" (driver="qemu2")
	I1028 05:18:30.967805   10692 client.go:168] LocalClient.Create starting
	I1028 05:18:30.967885   10692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:18:30.967922   10692 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:30.967933   10692 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:30.967967   10692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:18:30.967995   10692 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:30.968002   10692 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:30.968327   10692 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:18:31.126189   10692 main.go:141] libmachine: Creating SSH key...
	I1028 05:18:31.204165   10692 main.go:141] libmachine: Creating Disk image...
	I1028 05:18:31.204171   10692 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:18:31.204366   10692 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2
	I1028 05:18:31.214129   10692 main.go:141] libmachine: STDOUT: 
	I1028 05:18:31.214158   10692 main.go:141] libmachine: STDERR: 
	I1028 05:18:31.214219   10692 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2 +20000M
	I1028 05:18:31.222866   10692 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:18:31.222881   10692 main.go:141] libmachine: STDERR: 
	I1028 05:18:31.222901   10692 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2
	I1028 05:18:31.222907   10692 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:18:31.222920   10692 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:18:31.222950   10692 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:d3:f2:1e:4a:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2
	I1028 05:18:31.224873   10692 main.go:141] libmachine: STDOUT: 
	I1028 05:18:31.224888   10692 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:18:31.224909   10692 client.go:171] duration metric: took 257.103708ms to LocalClient.Create
	I1028 05:18:33.227087   10692 start.go:128] duration metric: took 2.278536583s to createHost
	I1028 05:18:33.227163   10692 start.go:83] releasing machines lock for "old-k8s-version-180000", held for 2.278669542s
	W1028 05:18:33.227246   10692 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:33.238370   10692 out.go:177] * Deleting "old-k8s-version-180000" in qemu2 ...
	W1028 05:18:33.269882   10692 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:33.269920   10692 start.go:729] Will try again in 5 seconds ...
	I1028 05:18:38.270887   10692 start.go:360] acquireMachinesLock for old-k8s-version-180000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:18:38.271476   10692 start.go:364] duration metric: took 506.625µs to acquireMachinesLock for "old-k8s-version-180000"
	I1028 05:18:38.271608   10692 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:18:38.271906   10692 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:18:38.281437   10692 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:18:38.329160   10692 start.go:159] libmachine.API.Create for "old-k8s-version-180000" (driver="qemu2")
	I1028 05:18:38.329211   10692 client.go:168] LocalClient.Create starting
	I1028 05:18:38.329355   10692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:18:38.329435   10692 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:38.329453   10692 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:38.329524   10692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:18:38.329588   10692 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:38.329600   10692 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:38.331083   10692 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:18:38.498479   10692 main.go:141] libmachine: Creating SSH key...
	I1028 05:18:38.584054   10692 main.go:141] libmachine: Creating Disk image...
	I1028 05:18:38.584066   10692 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:18:38.584262   10692 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2
	I1028 05:18:38.594350   10692 main.go:141] libmachine: STDOUT: 
	I1028 05:18:38.594375   10692 main.go:141] libmachine: STDERR: 
	I1028 05:18:38.594440   10692 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2 +20000M
	I1028 05:18:38.603253   10692 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:18:38.603269   10692 main.go:141] libmachine: STDERR: 
	I1028 05:18:38.603280   10692 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2
	I1028 05:18:38.603286   10692 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:18:38.603305   10692 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:18:38.603333   10692 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:c2:a2:e1:81:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2
	I1028 05:18:38.605225   10692 main.go:141] libmachine: STDOUT: 
	I1028 05:18:38.605245   10692 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:18:38.605267   10692 client.go:171] duration metric: took 276.055416ms to LocalClient.Create
	I1028 05:18:40.607425   10692 start.go:128] duration metric: took 2.335538584s to createHost
	I1028 05:18:40.607498   10692 start.go:83] releasing machines lock for "old-k8s-version-180000", held for 2.336050334s
	W1028 05:18:40.607916   10692 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-180000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-180000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:40.617536   10692 out.go:201] 
	W1028 05:18:40.623686   10692 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:18:40.623734   10692 out.go:270] * 
	* 
	W1028 05:18:40.626456   10692 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:18:40.635389   10692 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-180000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000: exit status 7 (70.51175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-180000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.90s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-180000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-180000 create -f testdata/busybox.yaml: exit status 1 (29.462166ms)

** stderr ** 
	error: context "old-k8s-version-180000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-180000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000: exit status 7 (33.725875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-180000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000: exit status 7 (33.709875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-180000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
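This subtest fails before kubectl ever reaches a cluster: FirstStart above exited without provisioning the VM, so the kubeconfig context old-k8s-version-180000 was never written, and each serial subtest that shells out to kubectl now cascades on "context ... does not exist". One way to separate such follow-on failures from a real kubectl problem is to list the contexts in the kubeconfig the run points at (path taken from the log above; a sketch, not part of the harness):

	# show which contexts the integration run's kubeconfig actually contains
	KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig \
	  kubectl config get-contexts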

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-180000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-180000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-180000 describe deploy/metrics-server -n kube-system: exit status 1 (27.700125ms)

** stderr ** 
	error: context "old-k8s-version-180000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-180000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000: exit status 7 (34.287625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-180000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-180000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-180000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.203611542s)

-- stdout --
	* [old-k8s-version-180000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-180000" primary control-plane node in "old-k8s-version-180000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-180000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-180000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1028 05:18:44.633603   10745 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:18:44.633764   10745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:44.633767   10745 out.go:358] Setting ErrFile to fd 2...
	I1028 05:18:44.633770   10745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:44.633909   10745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:18:44.635003   10745 out.go:352] Setting JSON to false
	I1028 05:18:44.652525   10745 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6495,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:18:44.652600   10745 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:18:44.657457   10745 out.go:177] * [old-k8s-version-180000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:18:44.665407   10745 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:18:44.665466   10745 notify.go:220] Checking for updates...
	I1028 05:18:44.673382   10745 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:18:44.676428   10745 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:18:44.679469   10745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:18:44.682378   10745 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:18:44.685394   10745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:18:44.688832   10745 config.go:182] Loaded profile config "old-k8s-version-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1028 05:18:44.692420   10745 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 05:18:44.695409   10745 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:18:44.699377   10745 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:18:44.706346   10745 start.go:297] selected driver: qemu2
	I1028 05:18:44.706352   10745 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:18:44.706399   10745 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:18:44.709084   10745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:18:44.709116   10745 cni.go:84] Creating CNI manager for ""
	I1028 05:18:44.709137   10745 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 05:18:44.709162   10745 start.go:340] cluster config:
	{Name:old-k8s-version-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:18:44.713741   10745 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:44.721415   10745 out.go:177] * Starting "old-k8s-version-180000" primary control-plane node in "old-k8s-version-180000" cluster
	I1028 05:18:44.724492   10745 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 05:18:44.724509   10745 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 05:18:44.724515   10745 cache.go:56] Caching tarball of preloaded images
	I1028 05:18:44.724594   10745 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:18:44.724607   10745 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 05:18:44.724660   10745 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/old-k8s-version-180000/config.json ...
	I1028 05:18:44.725137   10745 start.go:360] acquireMachinesLock for old-k8s-version-180000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:18:44.725167   10745 start.go:364] duration metric: took 23.917µs to acquireMachinesLock for "old-k8s-version-180000"
	I1028 05:18:44.725175   10745 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:18:44.725181   10745 fix.go:54] fixHost starting: 
	I1028 05:18:44.725312   10745 fix.go:112] recreateIfNeeded on old-k8s-version-180000: state=Stopped err=<nil>
	W1028 05:18:44.725320   10745 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:18:44.729391   10745 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-180000" ...
	I1028 05:18:44.737351   10745 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:18:44.737389   10745 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:c2:a2:e1:81:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2
	I1028 05:18:44.739663   10745 main.go:141] libmachine: STDOUT: 
	I1028 05:18:44.739683   10745 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:18:44.739710   10745 fix.go:56] duration metric: took 14.528417ms for fixHost
	I1028 05:18:44.739714   10745 start.go:83] releasing machines lock for "old-k8s-version-180000", held for 14.543042ms
	W1028 05:18:44.739719   10745 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:18:44.739772   10745 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:44.739776   10745 start.go:729] Will try again in 5 seconds ...
	I1028 05:18:49.741861   10745 start.go:360] acquireMachinesLock for old-k8s-version-180000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:18:49.742433   10745 start.go:364] duration metric: took 428.416µs to acquireMachinesLock for "old-k8s-version-180000"
	I1028 05:18:49.742648   10745 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:18:49.742670   10745 fix.go:54] fixHost starting: 
	I1028 05:18:49.743569   10745 fix.go:112] recreateIfNeeded on old-k8s-version-180000: state=Stopped err=<nil>
	W1028 05:18:49.743596   10745 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:18:49.750054   10745 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-180000" ...
	I1028 05:18:49.753996   10745 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:18:49.754254   10745 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:c2:a2:e1:81:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/old-k8s-version-180000/disk.qcow2
	I1028 05:18:49.764154   10745 main.go:141] libmachine: STDOUT: 
	I1028 05:18:49.764226   10745 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:18:49.764300   10745 fix.go:56] duration metric: took 21.634292ms for fixHost
	I1028 05:18:49.764317   10745 start.go:83] releasing machines lock for "old-k8s-version-180000", held for 21.812042ms
	W1028 05:18:49.764485   10745 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-180000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-180000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:49.772977   10745 out.go:201] 
	W1028 05:18:49.777178   10745 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:18:49.777203   10745 out.go:270] * 
	* 
	W1028 05:18:49.779089   10745 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:18:49.786999   10745 out.go:201] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-180000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000: exit status 7 (71.89575ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-180000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-180000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000: exit status 7 (35.439708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-180000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-180000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-180000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-180000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.736084ms)
** stderr ** 
	error: context "old-k8s-version-180000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-180000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000: exit status 7 (33.472542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-180000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-180000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000: exit status 7 (34.259959ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-180000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
TestStartStop/group/old-k8s-version/serial/Pause (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-180000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-180000 --alsologtostderr -v=1: exit status 83 (45.277625ms)
-- stdout --
	* The control-plane node old-k8s-version-180000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-180000"
-- /stdout --
** stderr ** 
	I1028 05:18:50.087078   10764 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:18:50.088197   10764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:50.088201   10764 out.go:358] Setting ErrFile to fd 2...
	I1028 05:18:50.088204   10764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:50.088358   10764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:18:50.088569   10764 out.go:352] Setting JSON to false
	I1028 05:18:50.088577   10764 mustload.go:65] Loading cluster: old-k8s-version-180000
	I1028 05:18:50.088797   10764 config.go:182] Loaded profile config "old-k8s-version-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1028 05:18:50.092402   10764 out.go:177] * The control-plane node old-k8s-version-180000 host is not running: state=Stopped
	I1028 05:18:50.096427   10764 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-180000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-180000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000: exit status 7 (35.490333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-180000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000: exit status 7 (42.387667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-180000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.12s)
TestStartStop/group/no-preload/serial/FirstStart (10.16s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-590000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-590000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (10.102848458s)
-- stdout --
	* [no-preload-590000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-590000" primary control-plane node in "no-preload-590000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-590000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1028 05:18:50.474528   10783 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:18:50.474695   10783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:50.474698   10783 out.go:358] Setting ErrFile to fd 2...
	I1028 05:18:50.474700   10783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:18:50.474824   10783 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:18:50.476208   10783 out.go:352] Setting JSON to false
	I1028 05:18:50.495238   10783 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6501,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:18:50.495311   10783 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:18:50.499357   10783 out.go:177] * [no-preload-590000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:18:50.504376   10783 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:18:50.504496   10783 notify.go:220] Checking for updates...
	I1028 05:18:50.516418   10783 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:18:50.528335   10783 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:18:50.539336   10783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:18:50.546305   10783 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:18:50.549397   10783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:18:50.552692   10783 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:18:50.552757   10783 config.go:182] Loaded profile config "stopped-upgrade-451000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1028 05:18:50.552804   10783 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:18:50.557335   10783 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:18:50.566396   10783 start.go:297] selected driver: qemu2
	I1028 05:18:50.566408   10783 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:18:50.566417   10783 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:18:50.569214   10783 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:18:50.573310   10783 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:18:50.576463   10783 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:18:50.576484   10783 cni.go:84] Creating CNI manager for ""
	I1028 05:18:50.576513   10783 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:18:50.576519   10783 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:18:50.576552   10783 start.go:340] cluster config:
	{Name:no-preload-590000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:18:50.581468   10783 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:50.590335   10783 out.go:177] * Starting "no-preload-590000" primary control-plane node in "no-preload-590000" cluster
	I1028 05:18:50.597399   10783 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:18:50.597491   10783 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/no-preload-590000/config.json ...
	I1028 05:18:50.597508   10783 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/no-preload-590000/config.json: {Name:mk8e2ca57f93c85426043820529b07e45ac78ffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:18:50.597503   10783 cache.go:107] acquiring lock: {Name:mk1a90be8c3bab33e5c45d3a8d8f271f19ce1a6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:50.597511   10783 cache.go:107] acquiring lock: {Name:mkfc9dc1347960444cbd718a9a2c2e8020573492 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:50.597521   10783 cache.go:107] acquiring lock: {Name:mkfbcaa2f7fd47c1e543c84468d3e939f1187ac0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:50.597572   10783 cache.go:107] acquiring lock: {Name:mkeb8e9ab7d2dba2ce3678178f006c6b416bedf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:50.597603   10783 cache.go:107] acquiring lock: {Name:mkf78fabb840a20cb0bfada373376a7d49c178bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:50.597628   10783 cache.go:115] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1028 05:18:50.597667   10783 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 155.667µs
	I1028 05:18:50.597690   10783 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1028 05:18:50.597660   10783 cache.go:107] acquiring lock: {Name:mk05b1cabbc2c03fa57898e88d48376a9efc5917 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:50.597705   10783 cache.go:107] acquiring lock: {Name:mkaada2d9950777bd757ca02c5bc603233fd4374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:50.597710   10783 cache.go:107] acquiring lock: {Name:mkbc00d9eaaa9d9df409c31553e8d99cc8be6e37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:18:50.597780   10783 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 05:18:50.597837   10783 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 05:18:50.597818   10783 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 05:18:50.600422   10783 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 05:18:50.600473   10783 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 05:18:50.600522   10783 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 05:18:50.600532   10783 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 05:18:50.600653   10783 start.go:360] acquireMachinesLock for no-preload-590000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:18:50.600708   10783 start.go:364] duration metric: took 43.125µs to acquireMachinesLock for "no-preload-590000"
	I1028 05:18:50.600720   10783 start.go:93] Provisioning new machine with config: &{Name:no-preload-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:18:50.600770   10783 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:18:50.605336   10783 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:18:50.617974   10783 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 05:18:50.620297   10783 start.go:159] libmachine.API.Create for "no-preload-590000" (driver="qemu2")
	I1028 05:18:50.620323   10783 client.go:168] LocalClient.Create starting
	I1028 05:18:50.620392   10783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:18:50.620430   10783 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:50.620440   10783 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:50.620481   10783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:18:50.620511   10783 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:50.620568   10783 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:50.620982   10783 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:18:50.621120   10783 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 05:18:50.621124   10783 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 05:18:50.624569   10783 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 05:18:50.624570   10783 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 05:18:50.624630   10783 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 05:18:50.624813   10783 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 05:18:50.979858   10783 main.go:141] libmachine: Creating SSH key...
	I1028 05:18:51.007422   10783 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1028 05:18:51.042891   10783 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 05:18:51.092881   10783 main.go:141] libmachine: Creating Disk image...
	I1028 05:18:51.092902   10783 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:18:51.093127   10783 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2
	I1028 05:18:51.103757   10783 main.go:141] libmachine: STDOUT: 
	I1028 05:18:51.103780   10783 main.go:141] libmachine: STDERR: 
	I1028 05:18:51.103837   10783 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2 +20000M
	I1028 05:18:51.105436   10783 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 05:18:51.113173   10783 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:18:51.113183   10783 main.go:141] libmachine: STDERR: 
	I1028 05:18:51.113195   10783 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2
	I1028 05:18:51.113199   10783 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:18:51.113211   10783 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:18:51.113240   10783 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:31:09:1d:0c:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2
	I1028 05:18:51.115555   10783 main.go:141] libmachine: STDOUT: 
	I1028 05:18:51.115584   10783 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:18:51.115602   10783 client.go:171] duration metric: took 495.283ms to LocalClient.Create
	I1028 05:18:51.159478   10783 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1028 05:18:51.159504   10783 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 561.852875ms
	I1028 05:18:51.159510   10783 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1028 05:18:51.199044   10783 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1028 05:18:51.202816   10783 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 05:18:51.282281   10783 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 05:18:51.361533   10783 cache.go:162] opening:  /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 05:18:53.115765   10783 start.go:128] duration metric: took 2.515032916s to createHost
	I1028 05:18:53.115790   10783 start.go:83] releasing machines lock for "no-preload-590000", held for 2.515132041s
	W1028 05:18:53.115820   10783 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:53.128501   10783 out.go:177] * Deleting "no-preload-590000" in qemu2 ...
	W1028 05:18:53.142866   10783 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:18:53.142877   10783 start.go:729] Will try again in 5 seconds ...
	I1028 05:18:53.903569   10783 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1028 05:18:53.903607   10783 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 3.306104834s
	I1028 05:18:53.903619   10783 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1028 05:18:54.409955   10783 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1028 05:18:54.409995   10783 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 3.812372458s
	I1028 05:18:54.410027   10783 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1028 05:18:54.486400   10783 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1028 05:18:54.486449   10783 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.8889225s
	I1028 05:18:54.486510   10783 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1028 05:18:55.226240   10783 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1028 05:18:55.226311   10783 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 4.628902125s
	I1028 05:18:55.226335   10783 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1028 05:18:56.164740   10783 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1028 05:18:56.164770   10783 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 5.567385625s
	I1028 05:18:56.164784   10783 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1028 05:18:58.142954   10783 start.go:360] acquireMachinesLock for no-preload-590000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:18:58.143157   10783 start.go:364] duration metric: took 171.833µs to acquireMachinesLock for "no-preload-590000"
	I1028 05:18:58.143208   10783 start.go:93] Provisioning new machine with config: &{Name:no-preload-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:18:58.143297   10783 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:18:58.150630   10783 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:18:58.176990   10783 start.go:159] libmachine.API.Create for "no-preload-590000" (driver="qemu2")
	I1028 05:18:58.177035   10783 client.go:168] LocalClient.Create starting
	I1028 05:18:58.177183   10783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:18:58.177266   10783 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:58.177285   10783 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:58.177351   10783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:18:58.177392   10783 main.go:141] libmachine: Decoding PEM data...
	I1028 05:18:58.177414   10783 main.go:141] libmachine: Parsing certificate...
	I1028 05:18:58.177882   10783 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:18:58.337988   10783 main.go:141] libmachine: Creating SSH key...
	I1028 05:18:58.482379   10783 main.go:141] libmachine: Creating Disk image...
	I1028 05:18:58.482387   10783 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:18:58.482573   10783 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2
	I1028 05:18:58.493202   10783 main.go:141] libmachine: STDOUT: 
	I1028 05:18:58.493220   10783 main.go:141] libmachine: STDERR: 
	I1028 05:18:58.493297   10783 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2 +20000M
	I1028 05:18:58.502450   10783 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:18:58.502477   10783 main.go:141] libmachine: STDERR: 
	I1028 05:18:58.502494   10783 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2
	I1028 05:18:58.502501   10783 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:18:58.502510   10783 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:18:58.502551   10783 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:1f:f0:89:c5:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2
	I1028 05:18:58.504633   10783 main.go:141] libmachine: STDOUT: 
	I1028 05:18:58.504659   10783 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:18:58.504674   10783 client.go:171] duration metric: took 327.640375ms to LocalClient.Create
	I1028 05:19:00.423312   10783 cache.go:157] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1028 05:19:00.423372   10783 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 9.826004333s
	I1028 05:19:00.423396   10783 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1028 05:19:00.423448   10783 cache.go:87] Successfully saved all images to host disk.
	I1028 05:19:00.505076   10783 start.go:128] duration metric: took 2.361793542s to createHost
	I1028 05:19:00.505139   10783 start.go:83] releasing machines lock for "no-preload-590000", held for 2.362020334s
	W1028 05:19:00.505422   10783 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-590000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-590000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:00.515375   10783 out.go:201] 
	W1028 05:19:00.521562   10783 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:00.521581   10783 out.go:270] * 
	* 
	W1028 05:19:00.522961   10783 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:19:00.533426   10783 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-590000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000: exit status 7 (57.158458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-590000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.16s)
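
Note: each qemu2 provisioning failure in this group traces to the same root cause shown above: the driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet. A minimal standalone probe (an illustrative sketch, not part of minikube or this test suite) can confirm the daemon's state before re-running the tests:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken verbatim from the failing log lines above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A missing or refused socket reproduces the driver's error exactly.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the socket_vmnet service on the CI host is the likely fix; the identical "Connection refused" errors in the later tests are then follow-on noise rather than independent failures.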

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-590000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-590000 create -f testdata/busybox.yaml: exit status 1 (28.839916ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-590000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-590000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000: exit status 7 (33.72425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-590000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000: exit status 7 (33.01425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-590000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-590000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-590000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-590000 describe deploy/metrics-server -n kube-system: exit status 1 (28.12025ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-590000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-590000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000: exit status 7 (33.88825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-590000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
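
Note: DeployApp and EnableAddonWhileActive above are cascade failures rather than independent bugs: FirstStart never created the "no-preload-590000" kubeconfig context, so every kubectl --context call exits with "context ... does not exist". A pre-check of the following shape (a hypothetical helper, not part of the test suite) makes that dependency explicit:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		name := "no-preload-590000"
		// "kubectl config get-contexts <name>" exits non-zero when the
		// context is absent, mirroring the errors logged above.
		cmd := exec.Command("kubectl", "config", "get-contexts", name)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Printf("context %q missing; skipping dependent steps: %v\n", name, err)
			os.Exit(1)
		}
	}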

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-590000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-590000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.20319725s)

                                                
                                                
-- stdout --
	* [no-preload-590000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-590000" primary control-plane node in "no-preload-590000" cluster
	* Restarting existing qemu2 VM for "no-preload-590000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-590000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:19:04.475546   10863 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:19:04.475704   10863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:04.475707   10863 out.go:358] Setting ErrFile to fd 2...
	I1028 05:19:04.475710   10863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:04.475852   10863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:19:04.477050   10863 out.go:352] Setting JSON to false
	I1028 05:19:04.495404   10863 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6515,"bootTime":1730111429,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:19:04.495485   10863 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:19:04.499888   10863 out.go:177] * [no-preload-590000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:19:04.506797   10863 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:19:04.506837   10863 notify.go:220] Checking for updates...
	I1028 05:19:04.512732   10863 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:19:04.515773   10863 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:19:04.518796   10863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:19:04.521803   10863 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:19:04.524761   10863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:19:04.528082   10863 config.go:182] Loaded profile config "no-preload-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:04.528338   10863 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:19:04.534811   10863 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:19:04.541791   10863 start.go:297] selected driver: qemu2
	I1028 05:19:04.541800   10863 start.go:901] validating driver "qemu2" against &{Name:no-preload-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:19:04.541873   10863 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:19:04.544322   10863 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:19:04.544350   10863 cni.go:84] Creating CNI manager for ""
	I1028 05:19:04.544368   10863 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:19:04.544392   10863 start.go:340] cluster config:
	{Name:no-preload-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:19:04.548693   10863 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:04.556793   10863 out.go:177] * Starting "no-preload-590000" primary control-plane node in "no-preload-590000" cluster
	I1028 05:19:04.560652   10863 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:19:04.560737   10863 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/no-preload-590000/config.json ...
	I1028 05:19:04.560776   10863 cache.go:107] acquiring lock: {Name:mk1a90be8c3bab33e5c45d3a8d8f271f19ce1a6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:04.560787   10863 cache.go:107] acquiring lock: {Name:mkfc9dc1347960444cbd718a9a2c2e8020573492 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:04.560797   10863 cache.go:107] acquiring lock: {Name:mkfbcaa2f7fd47c1e543c84468d3e939f1187ac0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:04.560867   10863 cache.go:115] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1028 05:19:04.560874   10863 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 108.833µs
	I1028 05:19:04.560880   10863 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1028 05:19:04.560880   10863 cache.go:115] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1028 05:19:04.560887   10863 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 111.875µs
	I1028 05:19:04.560861   10863 cache.go:107] acquiring lock: {Name:mkbc00d9eaaa9d9df409c31553e8d99cc8be6e37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:04.560919   10863 cache.go:107] acquiring lock: {Name:mk05b1cabbc2c03fa57898e88d48376a9efc5917 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:04.560935   10863 cache.go:107] acquiring lock: {Name:mkf78fabb840a20cb0bfada373376a7d49c178bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:04.560951   10863 cache.go:115] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1028 05:19:04.560957   10863 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 137.541µs
	I1028 05:19:04.560961   10863 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1028 05:19:04.560936   10863 cache.go:107] acquiring lock: {Name:mkeb8e9ab7d2dba2ce3678178f006c6b416bedf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:04.560892   10863 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1028 05:19:04.560897   10863 cache.go:115] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1028 05:19:04.560981   10863 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 191.167µs
	I1028 05:19:04.560985   10863 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1028 05:19:04.561006   10863 cache.go:115] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1028 05:19:04.561010   10863 cache.go:115] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1028 05:19:04.561011   10863 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 147.25µs
	I1028 05:19:04.561017   10863 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1028 05:19:04.560918   10863 cache.go:107] acquiring lock: {Name:mkaada2d9950777bd757ca02c5bc603233fd4374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:04.561040   10863 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 121.458µs
	I1028 05:19:04.561044   10863 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1028 05:19:04.561070   10863 cache.go:115] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1028 05:19:04.561073   10863 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 182.25µs
	I1028 05:19:04.561074   10863 cache.go:115] /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1028 05:19:04.561077   10863 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1028 05:19:04.561079   10863 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 165.334µs
	I1028 05:19:04.561084   10863 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1028 05:19:04.561087   10863 cache.go:87] Successfully saved all images to host disk.
	I1028 05:19:04.561151   10863 start.go:360] acquireMachinesLock for no-preload-590000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:04.561180   10863 start.go:364] duration metric: took 23.834µs to acquireMachinesLock for "no-preload-590000"
	I1028 05:19:04.561188   10863 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:19:04.561192   10863 fix.go:54] fixHost starting: 
	I1028 05:19:04.561297   10863 fix.go:112] recreateIfNeeded on no-preload-590000: state=Stopped err=<nil>
	W1028 05:19:04.561305   10863 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:19:04.568575   10863 out.go:177] * Restarting existing qemu2 VM for "no-preload-590000" ...
	I1028 05:19:04.572801   10863 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:04.572863   10863 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:1f:f0:89:c5:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2
	I1028 05:19:04.574972   10863 main.go:141] libmachine: STDOUT: 
	I1028 05:19:04.574991   10863 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:04.575016   10863 fix.go:56] duration metric: took 13.822583ms for fixHost
	I1028 05:19:04.575020   10863 start.go:83] releasing machines lock for "no-preload-590000", held for 13.836541ms
	W1028 05:19:04.575026   10863 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:04.575050   10863 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:04.575053   10863 start.go:729] Will try again in 5 seconds ...
	I1028 05:19:09.577283   10863 start.go:360] acquireMachinesLock for no-preload-590000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:09.577935   10863 start.go:364] duration metric: took 505.333µs to acquireMachinesLock for "no-preload-590000"
	I1028 05:19:09.578075   10863 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:19:09.578102   10863 fix.go:54] fixHost starting: 
	I1028 05:19:09.579055   10863 fix.go:112] recreateIfNeeded on no-preload-590000: state=Stopped err=<nil>
	W1028 05:19:09.579091   10863 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:19:09.593919   10863 out.go:177] * Restarting existing qemu2 VM for "no-preload-590000" ...
	I1028 05:19:09.598718   10863 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:09.598954   10863 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:1f:f0:89:c5:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/no-preload-590000/disk.qcow2
	I1028 05:19:09.610103   10863 main.go:141] libmachine: STDOUT: 
	I1028 05:19:09.610172   10863 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:09.610267   10863 fix.go:56] duration metric: took 32.168916ms for fixHost
	I1028 05:19:09.610305   10863 start.go:83] releasing machines lock for "no-preload-590000", held for 32.347958ms
	W1028 05:19:09.610491   10863 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-590000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-590000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:09.617676   10863 out.go:201] 
	W1028 05:19:09.620792   10863 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:09.620824   10863 out.go:270] * 
	* 
	W1028 05:19:09.623145   10863 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:19:09.631723   10863 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-590000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000: exit status 7 (71.099834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-590000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.28s)
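
Note: the SecondStart log above also shows minikube's recovery path: fixHost fails, the start logic waits ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION. Because the socket_vmnet daemon never comes back, the retry cannot succeed. A simplified stand-in for that observed control flow (not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start; in the log it is the
	// socket_vmnet_client invocation that is refused.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // fixed delay, matching the log
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}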

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (10.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-384000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-384000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.955686334s)

                                                
                                                
-- stdout --
	* [embed-certs-384000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-384000" primary control-plane node in "embed-certs-384000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:19:06.799901   10873 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:19:06.800089   10873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:06.800094   10873 out.go:358] Setting ErrFile to fd 2...
	I1028 05:19:06.800096   10873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:06.800235   10873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:19:06.801451   10873 out.go:352] Setting JSON to false
	I1028 05:19:06.819238   10873 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6517,"bootTime":1730111429,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:19:06.819310   10873 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:19:06.823922   10873 out.go:177] * [embed-certs-384000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:19:06.832121   10873 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:19:06.832131   10873 notify.go:220] Checking for updates...
	I1028 05:19:06.838015   10873 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:19:06.841051   10873 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:19:06.843956   10873 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:19:06.846997   10873 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:19:06.850062   10873 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:19:06.851774   10873 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:06.851860   10873 config.go:182] Loaded profile config "no-preload-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:06.851903   10873 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:19:06.856058   10873 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:19:06.862893   10873 start.go:297] selected driver: qemu2
	I1028 05:19:06.862900   10873 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:19:06.862907   10873 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:19:06.865332   10873 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:19:06.868012   10873 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:19:06.871150   10873 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:19:06.871171   10873 cni.go:84] Creating CNI manager for ""
	I1028 05:19:06.871194   10873 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:19:06.871198   10873 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:19:06.871226   10873 start.go:340] cluster config:
	{Name:embed-certs-384000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:19:06.875888   10873 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:06.884026   10873 out.go:177] * Starting "embed-certs-384000" primary control-plane node in "embed-certs-384000" cluster
	I1028 05:19:06.888026   10873 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:19:06.888046   10873 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:19:06.888058   10873 cache.go:56] Caching tarball of preloaded images
	I1028 05:19:06.888164   10873 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:19:06.888171   10873 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:19:06.888239   10873 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/embed-certs-384000/config.json ...
	I1028 05:19:06.888252   10873 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/embed-certs-384000/config.json: {Name:mkf0c4d321dc51e36af86fcdac34e29068821bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:19:06.888635   10873 start.go:360] acquireMachinesLock for embed-certs-384000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:06.888687   10873 start.go:364] duration metric: took 45.25µs to acquireMachinesLock for "embed-certs-384000"
	I1028 05:19:06.888700   10873 start.go:93] Provisioning new machine with config: &{Name:embed-certs-384000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:19:06.888731   10873 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:19:06.896073   10873 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:19:06.913121   10873 start.go:159] libmachine.API.Create for "embed-certs-384000" (driver="qemu2")
	I1028 05:19:06.913146   10873 client.go:168] LocalClient.Create starting
	I1028 05:19:06.913216   10873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:19:06.913255   10873 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:06.913271   10873 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:06.913313   10873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:19:06.913343   10873 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:06.913351   10873 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:06.913855   10873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:19:07.071817   10873 main.go:141] libmachine: Creating SSH key...
	I1028 05:19:07.149672   10873 main.go:141] libmachine: Creating Disk image...
	I1028 05:19:07.149685   10873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:19:07.149896   10873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2
	I1028 05:19:07.160191   10873 main.go:141] libmachine: STDOUT: 
	I1028 05:19:07.160214   10873 main.go:141] libmachine: STDERR: 
	I1028 05:19:07.160274   10873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2 +20000M
	I1028 05:19:07.169342   10873 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:19:07.169359   10873 main.go:141] libmachine: STDERR: 
	I1028 05:19:07.169384   10873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2
	I1028 05:19:07.169391   10873 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:19:07.169404   10873 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:07.169442   10873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:11:10:4f:79:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2
	I1028 05:19:07.171422   10873 main.go:141] libmachine: STDOUT: 
	I1028 05:19:07.171435   10873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:07.171456   10873 client.go:171] duration metric: took 258.308417ms to LocalClient.Create
	I1028 05:19:09.173634   10873 start.go:128] duration metric: took 2.284918s to createHost
	I1028 05:19:09.173712   10873 start.go:83] releasing machines lock for "embed-certs-384000", held for 2.285064375s
	W1028 05:19:09.173830   10873 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:09.185246   10873 out.go:177] * Deleting "embed-certs-384000" in qemu2 ...
	W1028 05:19:09.217408   10873 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:09.217469   10873 start.go:729] Will try again in 5 seconds ...
	I1028 05:19:14.219593   10873 start.go:360] acquireMachinesLock for embed-certs-384000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:14.220355   10873 start.go:364] duration metric: took 601.792µs to acquireMachinesLock for "embed-certs-384000"
	I1028 05:19:14.220546   10873 start.go:93] Provisioning new machine with config: &{Name:embed-certs-384000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:19:14.220843   10873 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:19:14.230488   10873 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:19:14.281171   10873 start.go:159] libmachine.API.Create for "embed-certs-384000" (driver="qemu2")
	I1028 05:19:14.281224   10873 client.go:168] LocalClient.Create starting
	I1028 05:19:14.281384   10873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:19:14.281469   10873 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:14.281490   10873 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:14.281558   10873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:19:14.281615   10873 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:14.281627   10873 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:14.282256   10873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:19:14.457889   10873 main.go:141] libmachine: Creating SSH key...
	I1028 05:19:14.656382   10873 main.go:141] libmachine: Creating Disk image...
	I1028 05:19:14.656390   10873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:19:14.656606   10873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2
	I1028 05:19:14.666936   10873 main.go:141] libmachine: STDOUT: 
	I1028 05:19:14.667042   10873 main.go:141] libmachine: STDERR: 
	I1028 05:19:14.667101   10873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2 +20000M
	I1028 05:19:14.675681   10873 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:19:14.675747   10873 main.go:141] libmachine: STDERR: 
	I1028 05:19:14.675763   10873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2
	I1028 05:19:14.675769   10873 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:19:14.675778   10873 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:14.675815   10873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:5e:a5:34:e3:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2
	I1028 05:19:14.677692   10873 main.go:141] libmachine: STDOUT: 
	I1028 05:19:14.677706   10873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:14.677720   10873 client.go:171] duration metric: took 396.498417ms to LocalClient.Create
	I1028 05:19:16.679846   10873 start.go:128] duration metric: took 2.4589985s to createHost
	I1028 05:19:16.679900   10873 start.go:83] releasing machines lock for "embed-certs-384000", held for 2.459572084s
	W1028 05:19:16.680334   10873 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:16.687565   10873 out.go:201] 
	W1028 05:19:16.694954   10873 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:16.694993   10873 out.go:270] * 
	W1028 05:19:16.697654   10873 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:19:16.706936   10873 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-384000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (70.978667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.03s)
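
Root cause for this group: every qemu-system-aarch64 launch is brokered through socket_vmnet_client, and the client cannot reach the daemon's socket, so the VM never comes up. A minimal check on the build agent, assuming the Homebrew install of socket_vmnet that the paths in the log suggest:

	# Is the socket_vmnet daemon running, and does its socket exist at the
	# path minikube was configured with?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, the Homebrew service can be (re)started; minikube's docs have it
	# run as root so it can create the vmnet interface:
	sudo brew services start socket_vmnet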

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-590000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000: exit status 7 (34.331958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-590000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-590000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-590000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-590000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.465666ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-590000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-590000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000: exit status 7 (33.48775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-590000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
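
The `context "no-preload-590000" does not exist` errors in this and the previous test are secondary failures: since FirstStart never created the VM, minikube never wrote a kubeconfig entry for the profile, so every kubectl step fails the same way. A quick way to confirm no context was written (standard kubectl, nothing profile-specific assumed):

	kubectl config get-contexts
	# or just the context names:
	kubectl config view -o jsonpath='{.contexts[*].name}'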

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-590000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000: exit status 7 (33.661875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-590000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
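
The want/got diff above lists every expected image as missing because `image list` against a profile whose host never started returns an empty set, not because the wrong images were pulled. For a readable manual comparison when a cluster is actually up, a sketch, assuming `jq` is available on the agent and that the JSON output carries `repoTags` fields as recent minikube releases do:

	out/minikube-darwin-arm64 -p no-preload-590000 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort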

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-590000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-590000 --alsologtostderr -v=1: exit status 83 (44.953416ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-590000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-590000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:19:09.925734   10896 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:19:09.925933   10896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:09.925936   10896 out.go:358] Setting ErrFile to fd 2...
	I1028 05:19:09.925938   10896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:09.926066   10896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:19:09.926281   10896 out.go:352] Setting JSON to false
	I1028 05:19:09.926289   10896 mustload.go:65] Loading cluster: no-preload-590000
	I1028 05:19:09.926525   10896 config.go:182] Loaded profile config "no-preload-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:09.930250   10896 out.go:177] * The control-plane node no-preload-590000 host is not running: state=Stopped
	I1028 05:19:09.934219   10896 out.go:177]   To start a cluster, run: "minikube start -p no-preload-590000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-590000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000: exit status 7 (33.461917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-590000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000: exit status 7 (33.848917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-590000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
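
`pause` exits with status 83 here; in this run that status simply accompanies the "host is not running: state=Stopped" advice rather than any pause-specific fault. Checking host state first, exactly as the harness does with the same binary and flags used throughout this report, makes the failure mode obvious:

	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000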

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-220000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-220000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.81236675s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-220000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-220000" primary control-plane node in "default-k8s-diff-port-220000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:19:10.388054   10920 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:19:10.388199   10920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:10.388202   10920 out.go:358] Setting ErrFile to fd 2...
	I1028 05:19:10.388210   10920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:10.388345   10920 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:19:10.389496   10920 out.go:352] Setting JSON to false
	I1028 05:19:10.407134   10920 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6521,"bootTime":1730111429,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:19:10.407201   10920 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:19:10.411193   10920 out.go:177] * [default-k8s-diff-port-220000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:19:10.418259   10920 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:19:10.418304   10920 notify.go:220] Checking for updates...
	I1028 05:19:10.426185   10920 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:19:10.429187   10920 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:19:10.432137   10920 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:19:10.435167   10920 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:19:10.438225   10920 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:19:10.441534   10920 config.go:182] Loaded profile config "embed-certs-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:10.441599   10920 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:10.441639   10920 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:19:10.446166   10920 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:19:10.452156   10920 start.go:297] selected driver: qemu2
	I1028 05:19:10.452161   10920 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:19:10.452174   10920 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:19:10.454670   10920 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 05:19:10.457141   10920 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:19:10.460389   10920 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:19:10.460410   10920 cni.go:84] Creating CNI manager for ""
	I1028 05:19:10.460434   10920 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:19:10.460439   10920 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:19:10.460479   10920 start.go:340] cluster config:
	{Name:default-k8s-diff-port-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:19:10.465156   10920 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:10.473173   10920 out.go:177] * Starting "default-k8s-diff-port-220000" primary control-plane node in "default-k8s-diff-port-220000" cluster
	I1028 05:19:10.477195   10920 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:19:10.477210   10920 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:19:10.477224   10920 cache.go:56] Caching tarball of preloaded images
	I1028 05:19:10.477296   10920 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:19:10.477302   10920 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:19:10.477363   10920 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/default-k8s-diff-port-220000/config.json ...
	I1028 05:19:10.477374   10920 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/default-k8s-diff-port-220000/config.json: {Name:mk0fc99764872b53c0b28bebe51bcc28a39091ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:19:10.477744   10920 start.go:360] acquireMachinesLock for default-k8s-diff-port-220000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:10.477798   10920 start.go:364] duration metric: took 45.5µs to acquireMachinesLock for "default-k8s-diff-port-220000"
	I1028 05:19:10.477810   10920 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:19:10.477838   10920 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:19:10.486029   10920 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:19:10.503236   10920 start.go:159] libmachine.API.Create for "default-k8s-diff-port-220000" (driver="qemu2")
	I1028 05:19:10.503273   10920 client.go:168] LocalClient.Create starting
	I1028 05:19:10.503339   10920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:19:10.503404   10920 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:10.503420   10920 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:10.503456   10920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:19:10.503491   10920 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:10.503497   10920 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:10.503968   10920 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:19:10.659913   10920 main.go:141] libmachine: Creating SSH key...
	I1028 05:19:10.725847   10920 main.go:141] libmachine: Creating Disk image...
	I1028 05:19:10.725854   10920 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:19:10.726060   10920 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2
	I1028 05:19:10.735657   10920 main.go:141] libmachine: STDOUT: 
	I1028 05:19:10.735674   10920 main.go:141] libmachine: STDERR: 
	I1028 05:19:10.735724   10920 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2 +20000M
	I1028 05:19:10.744094   10920 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:19:10.744117   10920 main.go:141] libmachine: STDERR: 
	I1028 05:19:10.744135   10920 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2
	I1028 05:19:10.744142   10920 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:19:10.744158   10920 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:10.744184   10920 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:eb:06:99:07:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2
	I1028 05:19:10.745941   10920 main.go:141] libmachine: STDOUT: 
	I1028 05:19:10.745954   10920 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:10.745975   10920 client.go:171] duration metric: took 242.703458ms to LocalClient.Create
	I1028 05:19:12.748238   10920 start.go:128] duration metric: took 2.270418542s to createHost
	I1028 05:19:12.748306   10920 start.go:83] releasing machines lock for "default-k8s-diff-port-220000", held for 2.270547666s
	W1028 05:19:12.748362   10920 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:12.758659   10920 out.go:177] * Deleting "default-k8s-diff-port-220000" in qemu2 ...
	W1028 05:19:12.792828   10920 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:12.792858   10920 start.go:729] Will try again in 5 seconds ...
	I1028 05:19:17.794913   10920 start.go:360] acquireMachinesLock for default-k8s-diff-port-220000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:17.795466   10920 start.go:364] duration metric: took 479.708µs to acquireMachinesLock for "default-k8s-diff-port-220000"
	I1028 05:19:17.795626   10920 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:19:17.795862   10920 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:19:17.805531   10920 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:19:17.854930   10920 start.go:159] libmachine.API.Create for "default-k8s-diff-port-220000" (driver="qemu2")
	I1028 05:19:17.854995   10920 client.go:168] LocalClient.Create starting
	I1028 05:19:17.855116   10920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:19:17.855168   10920 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:17.855186   10920 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:17.855261   10920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:19:17.855291   10920 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:17.855305   10920 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:17.855911   10920 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:19:18.023450   10920 main.go:141] libmachine: Creating SSH key...
	I1028 05:19:18.094495   10920 main.go:141] libmachine: Creating Disk image...
	I1028 05:19:18.094501   10920 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:19:18.094704   10920 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2
	I1028 05:19:18.104773   10920 main.go:141] libmachine: STDOUT: 
	I1028 05:19:18.104789   10920 main.go:141] libmachine: STDERR: 
	I1028 05:19:18.104850   10920 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2 +20000M
	I1028 05:19:18.113268   10920 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:19:18.113285   10920 main.go:141] libmachine: STDERR: 
	I1028 05:19:18.113297   10920 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2
	I1028 05:19:18.113302   10920 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:19:18.113311   10920 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:18.113344   10920 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d9:70:e2:3e:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2
	I1028 05:19:18.115183   10920 main.go:141] libmachine: STDOUT: 
	I1028 05:19:18.115198   10920 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:18.115211   10920 client.go:171] duration metric: took 260.215ms to LocalClient.Create
	I1028 05:19:20.117337   10920 start.go:128] duration metric: took 2.321496875s to createHost
	I1028 05:19:20.117386   10920 start.go:83] releasing machines lock for "default-k8s-diff-port-220000", held for 2.32194375s
	W1028 05:19:20.117705   10920 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:20.132489   10920 out.go:201] 
	W1028 05:19:20.136500   10920 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:20.136527   10920 out.go:270] * 
	W1028 05:19:20.139223   10920 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:19:20.153482   10920 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-220000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (69.776167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-220000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.88s)
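
Note the retry behavior visible in the stderr above: after the first `Connection refused`, minikube deletes the half-created machine and retries once after 5 seconds, and the second attempt fails identically, which points at the daemon side rather than a transient race. A sketch of verifying the two paths this profile was configured with, taken directly from the config dump above:

	test -x /opt/socket_vmnet/bin/socket_vmnet_client && echo "client ok"
	test -S /var/run/socket_vmnet && echo "socket ok"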

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-384000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-384000 create -f testdata/busybox.yaml: exit status 1 (28.6885ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-384000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-384000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (33.603584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-384000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (34.09275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-384000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-384000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-384000 describe deploy/metrics-server -n kube-system: exit status 1 (27.399291ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-384000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-384000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (33.501792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-220000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-220000 create -f testdata/busybox.yaml: exit status 1 (28.56225ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-220000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-220000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (33.570917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-220000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (32.903125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-220000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-220000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-220000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-220000 describe deploy/metrics-server -n kube-system: exit status 1 (27.532416ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-220000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-220000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (33.336ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-220000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-384000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-384000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.185685291s)

                                                
                                                
-- stdout --
	* [embed-certs-384000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-384000" primary control-plane node in "embed-certs-384000" cluster
	* Restarting existing qemu2 VM for "embed-certs-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:19:20.679967   10988 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:19:20.680119   10988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:20.680122   10988 out.go:358] Setting ErrFile to fd 2...
	I1028 05:19:20.680126   10988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:20.680257   10988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:19:20.681305   10988 out.go:352] Setting JSON to false
	I1028 05:19:20.698829   10988 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6531,"bootTime":1730111429,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:19:20.698905   10988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:19:20.701304   10988 out.go:177] * [embed-certs-384000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:19:20.708039   10988 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:19:20.708079   10988 notify.go:220] Checking for updates...
	I1028 05:19:20.713943   10988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:19:20.717006   10988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:19:20.718300   10988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:19:20.720950   10988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:19:20.723982   10988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:19:20.729505   10988 config.go:182] Loaded profile config "embed-certs-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:20.729780   10988 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:19:20.733943   10988 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:19:20.740976   10988 start.go:297] selected driver: qemu2
	I1028 05:19:20.740982   10988 start.go:901] validating driver "qemu2" against &{Name:embed-certs-384000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:19:20.741037   10988 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:19:20.743526   10988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:19:20.743551   10988 cni.go:84] Creating CNI manager for ""
	I1028 05:19:20.743584   10988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:19:20.743606   10988 start.go:340] cluster config:
	{Name:embed-certs-384000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:19:20.747879   10988 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:20.756974   10988 out.go:177] * Starting "embed-certs-384000" primary control-plane node in "embed-certs-384000" cluster
	I1028 05:19:20.760921   10988 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:19:20.760937   10988 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:19:20.760949   10988 cache.go:56] Caching tarball of preloaded images
	I1028 05:19:20.761026   10988 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:19:20.761032   10988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:19:20.761088   10988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/embed-certs-384000/config.json ...
	I1028 05:19:20.761519   10988 start.go:360] acquireMachinesLock for embed-certs-384000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:20.761549   10988 start.go:364] duration metric: took 24.25µs to acquireMachinesLock for "embed-certs-384000"
	I1028 05:19:20.761558   10988 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:19:20.761561   10988 fix.go:54] fixHost starting: 
	I1028 05:19:20.761680   10988 fix.go:112] recreateIfNeeded on embed-certs-384000: state=Stopped err=<nil>
	W1028 05:19:20.761688   10988 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:19:20.769933   10988 out.go:177] * Restarting existing qemu2 VM for "embed-certs-384000" ...
	I1028 05:19:20.773989   10988 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:20.774023   10988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:5e:a5:34:e3:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2
	I1028 05:19:20.776213   10988 main.go:141] libmachine: STDOUT: 
	I1028 05:19:20.776231   10988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:20.776257   10988 fix.go:56] duration metric: took 14.6935ms for fixHost
	I1028 05:19:20.776262   10988 start.go:83] releasing machines lock for "embed-certs-384000", held for 14.708625ms
	W1028 05:19:20.776269   10988 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:20.776301   10988 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:20.776305   10988 start.go:729] Will try again in 5 seconds ...
	I1028 05:19:25.778436   10988 start.go:360] acquireMachinesLock for embed-certs-384000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:25.778848   10988 start.go:364] duration metric: took 310µs to acquireMachinesLock for "embed-certs-384000"
	I1028 05:19:25.778962   10988 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:19:25.778981   10988 fix.go:54] fixHost starting: 
	I1028 05:19:25.779740   10988 fix.go:112] recreateIfNeeded on embed-certs-384000: state=Stopped err=<nil>
	W1028 05:19:25.779771   10988 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:19:25.787209   10988 out.go:177] * Restarting existing qemu2 VM for "embed-certs-384000" ...
	I1028 05:19:25.791297   10988 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:25.791540   10988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:5e:a5:34:e3:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/embed-certs-384000/disk.qcow2
	I1028 05:19:25.801298   10988 main.go:141] libmachine: STDOUT: 
	I1028 05:19:25.801353   10988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:25.801422   10988 fix.go:56] duration metric: took 22.439875ms for fixHost
	I1028 05:19:25.801442   10988 start.go:83] releasing machines lock for "embed-certs-384000", held for 22.57375ms
	W1028 05:19:25.801634   10988 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-384000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-384000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:25.807268   10988 out.go:201] 
	W1028 05:19:25.811358   10988 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:25.811382   10988 out.go:270] * 
	* 
	W1028 05:19:25.814242   10988 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:19:25.819831   10988 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-384000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (69.508916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
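
[Editor's note] Every failed start in this group dies at the same step: the qemu launch is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal Go sketch (not part of the test suite; the socket path is taken verbatim from the log above) that reproduces the refusal outside minikube:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken verbatim from the log
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // With no socket_vmnet daemon listening, this prints a
            // "connection refused" error, matching the failures above.
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the daemon were listening on that path, the dial would succeed and the qemu -netdev socket launches above would get their file descriptor.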

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-220000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-220000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.194853709s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-220000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-220000" primary control-plane node in "default-k8s-diff-port-220000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-220000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-220000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:19:24.408122   11013 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:19:24.408286   11013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:24.408289   11013 out.go:358] Setting ErrFile to fd 2...
	I1028 05:19:24.408291   11013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:24.408410   11013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:19:24.409458   11013 out.go:352] Setting JSON to false
	I1028 05:19:24.426968   11013 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6535,"bootTime":1730111429,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:19:24.427034   11013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:19:24.430978   11013 out.go:177] * [default-k8s-diff-port-220000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:19:24.437959   11013 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:19:24.438002   11013 notify.go:220] Checking for updates...
	I1028 05:19:24.445911   11013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:19:24.448959   11013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:19:24.450337   11013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:19:24.452910   11013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:19:24.455952   11013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:19:24.459271   11013 config.go:182] Loaded profile config "default-k8s-diff-port-220000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:24.459576   11013 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:19:24.462902   11013 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:19:24.469923   11013 start.go:297] selected driver: qemu2
	I1028 05:19:24.469928   11013 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:19:24.469976   11013 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:19:24.472409   11013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 05:19:24.472435   11013 cni.go:84] Creating CNI manager for ""
	I1028 05:19:24.472455   11013 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:19:24.472478   11013 start.go:340] cluster config:
	{Name:default-k8s-diff-port-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:19:24.476707   11013 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:24.484915   11013 out.go:177] * Starting "default-k8s-diff-port-220000" primary control-plane node in "default-k8s-diff-port-220000" cluster
	I1028 05:19:24.487916   11013 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:19:24.487937   11013 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:19:24.487947   11013 cache.go:56] Caching tarball of preloaded images
	I1028 05:19:24.488002   11013 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:19:24.488008   11013 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:19:24.488071   11013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/default-k8s-diff-port-220000/config.json ...
	I1028 05:19:24.488499   11013 start.go:360] acquireMachinesLock for default-k8s-diff-port-220000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:24.488528   11013 start.go:364] duration metric: took 23.292µs to acquireMachinesLock for "default-k8s-diff-port-220000"
	I1028 05:19:24.488537   11013 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:19:24.488541   11013 fix.go:54] fixHost starting: 
	I1028 05:19:24.488658   11013 fix.go:112] recreateIfNeeded on default-k8s-diff-port-220000: state=Stopped err=<nil>
	W1028 05:19:24.488665   11013 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:19:24.492943   11013 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-220000" ...
	I1028 05:19:24.500939   11013 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:24.500979   11013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d9:70:e2:3e:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2
	I1028 05:19:24.503082   11013 main.go:141] libmachine: STDOUT: 
	I1028 05:19:24.503101   11013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:24.503129   11013 fix.go:56] duration metric: took 14.586334ms for fixHost
	I1028 05:19:24.503133   11013 start.go:83] releasing machines lock for "default-k8s-diff-port-220000", held for 14.60025ms
	W1028 05:19:24.503138   11013 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:24.503178   11013 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:24.503183   11013 start.go:729] Will try again in 5 seconds ...
	I1028 05:19:29.505361   11013 start.go:360] acquireMachinesLock for default-k8s-diff-port-220000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:29.505921   11013 start.go:364] duration metric: took 462.666µs to acquireMachinesLock for "default-k8s-diff-port-220000"
	I1028 05:19:29.506043   11013 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:19:29.506066   11013 fix.go:54] fixHost starting: 
	I1028 05:19:29.506834   11013 fix.go:112] recreateIfNeeded on default-k8s-diff-port-220000: state=Stopped err=<nil>
	W1028 05:19:29.506861   11013 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:19:29.521572   11013 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-220000" ...
	I1028 05:19:29.526482   11013 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:29.526756   11013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d9:70:e2:3e:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/default-k8s-diff-port-220000/disk.qcow2
	I1028 05:19:29.537272   11013 main.go:141] libmachine: STDOUT: 
	I1028 05:19:29.537327   11013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:29.537414   11013 fix.go:56] duration metric: took 31.353583ms for fixHost
	I1028 05:19:29.537434   11013 start.go:83] releasing machines lock for "default-k8s-diff-port-220000", held for 31.487125ms
	W1028 05:19:29.537621   11013 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-220000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-220000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:29.545398   11013 out.go:201] 
	W1028 05:19:29.548455   11013 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:29.548485   11013 out.go:270] * 
	* 
	W1028 05:19:29.551151   11013 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:19:29.560408   11013 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-220000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (74.230083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-220000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
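
[Editor's note] This run fails identically and also shows minikube's recovery behavior: one retry after a fixed 5-second sleep, then exit with GUEST_PROVISION. A schematic Go sketch of that flow; startHost and the messages below are illustrative stand-ins for the driver calls in the log, not minikube's real API:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // startHost is an illustrative stand-in for the driver start that
    // fails with "connection refused" throughout this report.
    func startHost() error {
        return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
                os.Exit(80) // the exit status the test harness reports
            }
        }
    }

Because the daemon never comes up between attempts, the retry is guaranteed to fail the same way, which is why both attempts in each stderr trace are byte-for-byte identical.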

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-384000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (35.527583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)
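
[Editor's note] The `context "embed-certs-384000" does not exist` failure is a kubeconfig lookup error, not a VM error: SecondStart never succeeded, so no context was rewritten for the profile. A minimal sketch of the same lookup using k8s.io/client-go/tools/clientcmd; the kubeconfig path is the one in the log, and whether the harness uses exactly this call is an assumption:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path and context name are the ones in the log above.
        rules := &clientcmd.ClientConfigLoadingRules{
            ExplicitPath: "/Users/jenkins/minikube-integration/19875-6942/kubeconfig",
        }
        overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-384000"}
        cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
        if _, err := cfg.ClientConfig(); err != nil {
            // e.g. context "embed-certs-384000" does not exist
            fmt.Println(err)
        }
    }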

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-384000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-384000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-384000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.348541ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-384000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-384000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (33.028333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-384000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (33.156458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
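
[Editor's note] The missing-images diff above is in `-want +got` form, consistent with github.com/google/go-cmp output; assuming that package, here is a self-contained sketch that produces the same shape of diff when `image list` returns nothing, as it does for a host that never started:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        // A subset of the expected v1.31.2 image list from the test above.
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/kube-apiserver:v1.31.2",
            "registry.k8s.io/pause:3.10",
        }
        var got []string // empty: the VM never started, so no images to list
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
        }
    }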

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-384000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-384000 --alsologtostderr -v=1: exit status 83 (44.463ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-384000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-384000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:19:26.109347   11032 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:19:26.109544   11032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:26.109547   11032 out.go:358] Setting ErrFile to fd 2...
	I1028 05:19:26.109549   11032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:26.109672   11032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:19:26.109892   11032 out.go:352] Setting JSON to false
	I1028 05:19:26.109900   11032 mustload.go:65] Loading cluster: embed-certs-384000
	I1028 05:19:26.110120   11032 config.go:182] Loaded profile config "embed-certs-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:26.113512   11032 out.go:177] * The control-plane node embed-certs-384000 host is not running: state=Stopped
	I1028 05:19:26.117517   11032 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-384000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-384000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (33.404167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-384000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (33.464834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
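
[Editor's note] Each post-mortem runs `status --format={{.Host}}` and tolerates a non-zero exit ("exit status 7 (may be ok)"). A sketch of that check; the treatment of exit code 7 as a stopped-host indicator is inferred from this log, not from minikube documentation:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the post-mortem helper in the log above.
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "embed-certs-384000")
        out, err := cmd.Output()
        fmt.Printf("host state: %s\n", out) // "Stopped" in these runs
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 7 {
            // Non-zero exit encodes host state; 7 accompanies "Stopped" here.
            fmt.Println("status error: exit status 7 (may be ok)")
        }
    }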

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (10s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-641000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-641000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.924538791s)

                                                
                                                
-- stdout --
	* [newest-cni-641000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-641000" primary control-plane node in "newest-cni-641000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-641000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:19:26.451346   11049 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:19:26.451530   11049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:26.451534   11049 out.go:358] Setting ErrFile to fd 2...
	I1028 05:19:26.451537   11049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:26.451669   11049 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:19:26.452992   11049 out.go:352] Setting JSON to false
	I1028 05:19:26.470968   11049 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6537,"bootTime":1730111429,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:19:26.471037   11049 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:19:26.474544   11049 out.go:177] * [newest-cni-641000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:19:26.480495   11049 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:19:26.480519   11049 notify.go:220] Checking for updates...
	I1028 05:19:26.487564   11049 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:19:26.490461   11049 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:19:26.493511   11049 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:19:26.496511   11049 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:19:26.499403   11049 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:19:26.502839   11049 config.go:182] Loaded profile config "default-k8s-diff-port-220000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:26.502904   11049 config.go:182] Loaded profile config "multinode-268000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:26.502961   11049 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:19:26.507512   11049 out.go:177] * Using the qemu2 driver based on user configuration
	I1028 05:19:26.514454   11049 start.go:297] selected driver: qemu2
	I1028 05:19:26.514460   11049 start.go:901] validating driver "qemu2" against <nil>
	I1028 05:19:26.514465   11049 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:19:26.516959   11049 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1028 05:19:26.517055   11049 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1028 05:19:26.524409   11049 out.go:177] * Automatically selected the socket_vmnet network
	I1028 05:19:26.527625   11049 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1028 05:19:26.527645   11049 cni.go:84] Creating CNI manager for ""
	I1028 05:19:26.527668   11049 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:19:26.527676   11049 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 05:19:26.527718   11049 start.go:340] cluster config:
	{Name:newest-cni-641000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:19:26.532438   11049 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:26.540454   11049 out.go:177] * Starting "newest-cni-641000" primary control-plane node in "newest-cni-641000" cluster
	I1028 05:19:26.544506   11049 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:19:26.544524   11049 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:19:26.544533   11049 cache.go:56] Caching tarball of preloaded images
	I1028 05:19:26.544616   11049 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:19:26.544622   11049 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:19:26.544689   11049 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/newest-cni-641000/config.json ...
	I1028 05:19:26.544701   11049 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/newest-cni-641000/config.json: {Name:mk0645a557011ae0a60204469e4049995e74334a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 05:19:26.545088   11049 start.go:360] acquireMachinesLock for newest-cni-641000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:26.545135   11049 start.go:364] duration metric: took 42.25µs to acquireMachinesLock for "newest-cni-641000"
	I1028 05:19:26.545148   11049 start.go:93] Provisioning new machine with config: &{Name:newest-cni-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:19:26.545183   11049 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:19:26.548512   11049 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:19:26.565216   11049 start.go:159] libmachine.API.Create for "newest-cni-641000" (driver="qemu2")
	I1028 05:19:26.565238   11049 client.go:168] LocalClient.Create starting
	I1028 05:19:26.565310   11049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:19:26.565345   11049 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:26.565357   11049 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:26.565397   11049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:19:26.565425   11049 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:26.565432   11049 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:26.565885   11049 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:19:26.725002   11049 main.go:141] libmachine: Creating SSH key...
	I1028 05:19:26.808635   11049 main.go:141] libmachine: Creating Disk image...
	I1028 05:19:26.808641   11049 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:19:26.808825   11049 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2
	I1028 05:19:26.818892   11049 main.go:141] libmachine: STDOUT: 
	I1028 05:19:26.818928   11049 main.go:141] libmachine: STDERR: 
	I1028 05:19:26.818982   11049 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2 +20000M
	I1028 05:19:26.827469   11049 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:19:26.827491   11049 main.go:141] libmachine: STDERR: 
	I1028 05:19:26.827505   11049 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2
	I1028 05:19:26.827510   11049 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:19:26.827522   11049 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:26.827550   11049 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:77:8a:ff:4c:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2
	I1028 05:19:26.829359   11049 main.go:141] libmachine: STDOUT: 
	I1028 05:19:26.829372   11049 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:26.829391   11049 client.go:171] duration metric: took 264.153417ms to LocalClient.Create
	I1028 05:19:28.831527   11049 start.go:128] duration metric: took 2.28637125s to createHost
	I1028 05:19:28.831584   11049 start.go:83] releasing machines lock for "newest-cni-641000", held for 2.286489083s
	W1028 05:19:28.831665   11049 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:28.846950   11049 out.go:177] * Deleting "newest-cni-641000" in qemu2 ...
	W1028 05:19:28.873599   11049 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:28.873626   11049 start.go:729] Will try again in 5 seconds ...
	I1028 05:19:33.874101   11049 start.go:360] acquireMachinesLock for newest-cni-641000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:33.874897   11049 start.go:364] duration metric: took 617.291µs to acquireMachinesLock for "newest-cni-641000"
	I1028 05:19:33.875093   11049 start.go:93] Provisioning new machine with config: &{Name:newest-cni-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 05:19:33.875638   11049 start.go:125] createHost starting for "" (driver="qemu2")
	I1028 05:19:33.880354   11049 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 05:19:33.929441   11049 start.go:159] libmachine.API.Create for "newest-cni-641000" (driver="qemu2")
	I1028 05:19:33.929493   11049 client.go:168] LocalClient.Create starting
	I1028 05:19:33.929632   11049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/ca.pem
	I1028 05:19:33.929715   11049 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:33.929738   11049 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:33.929818   11049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19875-6942/.minikube/certs/cert.pem
	I1028 05:19:33.929874   11049 main.go:141] libmachine: Decoding PEM data...
	I1028 05:19:33.929888   11049 main.go:141] libmachine: Parsing certificate...
	I1028 05:19:33.930758   11049 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1028 05:19:34.099457   11049 main.go:141] libmachine: Creating SSH key...
	I1028 05:19:34.274607   11049 main.go:141] libmachine: Creating Disk image...
	I1028 05:19:34.274613   11049 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1028 05:19:34.274839   11049 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2.raw /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2
	I1028 05:19:34.285368   11049 main.go:141] libmachine: STDOUT: 
	I1028 05:19:34.285384   11049 main.go:141] libmachine: STDERR: 
	I1028 05:19:34.285442   11049 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2 +20000M
	I1028 05:19:34.293897   11049 main.go:141] libmachine: STDOUT: Image resized.
	
	I1028 05:19:34.293914   11049 main.go:141] libmachine: STDERR: 
	I1028 05:19:34.293926   11049 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2
	I1028 05:19:34.293932   11049 main.go:141] libmachine: Starting QEMU VM...
	I1028 05:19:34.293941   11049 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:34.293976   11049 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:66:94:3f:f6:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2
	I1028 05:19:34.295799   11049 main.go:141] libmachine: STDOUT: 
	I1028 05:19:34.295814   11049 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:34.295830   11049 client.go:171] duration metric: took 366.339834ms to LocalClient.Create
	I1028 05:19:36.298012   11049 start.go:128] duration metric: took 2.422383292s to createHost
	I1028 05:19:36.298090   11049 start.go:83] releasing machines lock for "newest-cni-641000", held for 2.423208125s
	W1028 05:19:36.298624   11049 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-641000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:36.312407   11049 out.go:201] 
	W1028 05:19:36.315470   11049 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:36.315543   11049 out.go:270] * 
	W1028 05:19:36.318938   11049 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:19:36.331389   11049 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-641000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000: exit status 7 (74.399459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.00s)
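Every failure in the run above bottoms out in the same error: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and host creation aborts. A first round of triage on the affected host might look like the sketch below, which assumes socket_vmnet was installed via Homebrew (the /opt/socket_vmnet paths in the log suggest that layout; adjust for a manual install):

	# Is the daemon's unix socket present on disk?
	ls -l /var/run/socket_vmnet

	# Is a socket_vmnet process registered with launchd?
	sudo launchctl list | grep -i socket_vmnet

	# For a Homebrew install, restarting the root-owned service may recover it:
	sudo brew services restart socket_vmnet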

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-220000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (35.391458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-220000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
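This failure and the remaining default-k8s-diff-port subtests below are cascades rather than independent bugs: the profile's VM never started (the same socket_vmnet refusal seen earlier), so minikube never wrote a kubeconfig context for it, and every command that names that context fails immediately. Assuming access to the host, the cascade can be confirmed with two quick checks:

	# The context was never created:
	kubectl config get-contexts | grep default-k8s-diff-port-220000

	# The profile exists, but its host reports Stopped:
	out/minikube-darwin-arm64 profile list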

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-220000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-220000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-220000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.298ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-220000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-220000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (32.748791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-220000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-220000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (32.835041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-220000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
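The diff above is go-cmp style output: each line prefixed with "-" is an image the test wanted but did not get, and the absence of any "+" lines means the image listing came back empty, which follows directly from the VM never having booted. The listing the test parses can be reproduced by hand with the command already shown in the log:

	out/minikube-darwin-arm64 -p default-k8s-diff-port-220000 image list --format=json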

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-220000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-220000 --alsologtostderr -v=1: exit status 83 (45.598542ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-220000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-220000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:19:29.850873   11071 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:19:29.851074   11071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:29.851077   11071 out.go:358] Setting ErrFile to fd 2...
	I1028 05:19:29.851080   11071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:29.851223   11071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:19:29.851459   11071 out.go:352] Setting JSON to false
	I1028 05:19:29.851466   11071 mustload.go:65] Loading cluster: default-k8s-diff-port-220000
	I1028 05:19:29.851690   11071 config.go:182] Loaded profile config "default-k8s-diff-port-220000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:29.856088   11071 out.go:177] * The control-plane node default-k8s-diff-port-220000 host is not running: state=Stopped
	I1028 05:19:29.860016   11071 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-220000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-220000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (32.604791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-220000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (33.093875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-220000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-641000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-641000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.195640333s)

                                                
                                                
-- stdout --
	* [newest-cni-641000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-641000" primary control-plane node in "newest-cni-641000" cluster
	* Restarting existing qemu2 VM for "newest-cni-641000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-641000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:19:38.572007   11114 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:19:38.572153   11114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:38.572156   11114 out.go:358] Setting ErrFile to fd 2...
	I1028 05:19:38.572158   11114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:38.572284   11114 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:19:38.573367   11114 out.go:352] Setting JSON to false
	I1028 05:19:38.591349   11114 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6549,"bootTime":1730111429,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 05:19:38.591421   11114 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 05:19:38.595065   11114 out.go:177] * [newest-cni-641000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 05:19:38.602092   11114 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 05:19:38.602159   11114 notify.go:220] Checking for updates...
	I1028 05:19:38.610051   11114 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 05:19:38.613092   11114 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 05:19:38.616017   11114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 05:19:38.619022   11114 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 05:19:38.621997   11114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 05:19:38.625250   11114 config.go:182] Loaded profile config "newest-cni-641000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:38.625523   11114 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 05:19:38.630030   11114 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 05:19:38.635985   11114 start.go:297] selected driver: qemu2
	I1028 05:19:38.635991   11114 start.go:901] validating driver "qemu2" against &{Name:newest-cni-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:19:38.636058   11114 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 05:19:38.638642   11114 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1028 05:19:38.638667   11114 cni.go:84] Creating CNI manager for ""
	I1028 05:19:38.638691   11114 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 05:19:38.638721   11114 start.go:340] cluster config:
	{Name:newest-cni-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 05:19:38.643314   11114 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 05:19:38.651012   11114 out.go:177] * Starting "newest-cni-641000" primary control-plane node in "newest-cni-641000" cluster
	I1028 05:19:38.654123   11114 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 05:19:38.654138   11114 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 05:19:38.654147   11114 cache.go:56] Caching tarball of preloaded images
	I1028 05:19:38.654229   11114 preload.go:172] Found /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 05:19:38.654234   11114 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 05:19:38.654300   11114 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/newest-cni-641000/config.json ...
	I1028 05:19:38.654765   11114 start.go:360] acquireMachinesLock for newest-cni-641000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:38.654793   11114 start.go:364] duration metric: took 22.875µs to acquireMachinesLock for "newest-cni-641000"
	I1028 05:19:38.654801   11114 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:19:38.654806   11114 fix.go:54] fixHost starting: 
	I1028 05:19:38.654923   11114 fix.go:112] recreateIfNeeded on newest-cni-641000: state=Stopped err=<nil>
	W1028 05:19:38.654931   11114 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:19:38.657994   11114 out.go:177] * Restarting existing qemu2 VM for "newest-cni-641000" ...
	I1028 05:19:38.666028   11114 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:38.666064   11114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:66:94:3f:f6:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2
	I1028 05:19:38.668230   11114 main.go:141] libmachine: STDOUT: 
	I1028 05:19:38.668250   11114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:38.668279   11114 fix.go:56] duration metric: took 13.472459ms for fixHost
	I1028 05:19:38.668283   11114 start.go:83] releasing machines lock for "newest-cni-641000", held for 13.485958ms
	W1028 05:19:38.668289   11114 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:38.668349   11114 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:38.668354   11114 start.go:729] Will try again in 5 seconds ...
	I1028 05:19:43.670389   11114 start.go:360] acquireMachinesLock for newest-cni-641000: {Name:mk58ce3537fb1c5394cb25fe957d1dc7c7ef2b80 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 05:19:43.670921   11114 start.go:364] duration metric: took 366.167µs to acquireMachinesLock for "newest-cni-641000"
	I1028 05:19:43.671064   11114 start.go:96] Skipping create...Using existing machine configuration
	I1028 05:19:43.671083   11114 fix.go:54] fixHost starting: 
	I1028 05:19:43.671742   11114 fix.go:112] recreateIfNeeded on newest-cni-641000: state=Stopped err=<nil>
	W1028 05:19:43.671769   11114 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 05:19:43.681232   11114 out.go:177] * Restarting existing qemu2 VM for "newest-cni-641000" ...
	I1028 05:19:43.685296   11114 qemu.go:418] Using hvf for hardware acceleration
	I1028 05:19:43.685522   11114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:66:94:3f:f6:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19875-6942/.minikube/machines/newest-cni-641000/disk.qcow2
	I1028 05:19:43.695051   11114 main.go:141] libmachine: STDOUT: 
	I1028 05:19:43.695106   11114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1028 05:19:43.695171   11114 fix.go:56] duration metric: took 24.085375ms for fixHost
	I1028 05:19:43.695187   11114 start.go:83] releasing machines lock for "newest-cni-641000", held for 24.240916ms
	W1028 05:19:43.695359   11114 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-641000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1028 05:19:43.702325   11114 out.go:201] 
	W1028 05:19:43.706189   11114 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1028 05:19:43.706214   11114 out.go:270] * 
	W1028 05:19:43.709078   11114 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 05:19:43.720260   11114 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-641000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000: exit status 7 (74.531292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
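SecondStart reuses the saved machine configuration, including Network:socket_vmnet, so the restart replays the same socket_vmnet_client invocation and hits the same refusal; retrying cannot succeed until the daemon is back. For local debugging, two hedged options (the second assumes this minikube release's qemu2 driver supports the builtin user-mode network; check the driver documentation before relying on it):

	# Probe the unix socket directly; -U makes nc dial a unix-domain socket:
	nc -U /var/run/socket_vmnet </dev/null || echo "socket unreachable"

	# Take socket_vmnet out of the loop entirely with user-mode networking:
	out/minikube-darwin-arm64 start -p newest-cni-641000 --driver=qemu2 --network=builtin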

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-641000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000: exit status 7 (35.970208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-641000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-641000 --alsologtostderr -v=1: exit status 83 (48.132583ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-641000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-641000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 05:19:43.926106   11128 out.go:345] Setting OutFile to fd 1 ...
	I1028 05:19:43.926313   11128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:43.926316   11128 out.go:358] Setting ErrFile to fd 2...
	I1028 05:19:43.926319   11128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 05:19:43.926458   11128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 05:19:43.926686   11128 out.go:352] Setting JSON to false
	I1028 05:19:43.926694   11128 mustload.go:65] Loading cluster: newest-cni-641000
	I1028 05:19:43.926936   11128 config.go:182] Loaded profile config "newest-cni-641000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 05:19:43.931597   11128 out.go:177] * The control-plane node newest-cni-641000 host is not running: state=Stopped
	I1028 05:19:43.935563   11128 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-641000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-641000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000: exit status 7 (34.690875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-641000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000: exit status 7 (34.279167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.12s)
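Once the daemon is healthy again, the recovery path the log itself suggests is the simplest one: deleting the profile discards the stale machine state so the next start recreates the disk and network configuration from scratch:

	out/minikube-darwin-arm64 delete -p newest-cni-641000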

                                                
                                    

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.2/json-events 7.94
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.08
18 TestDownloadOnly/v1.31.2/DeleteAll 0.11
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 11.15
39 TestErrorSpam/start 0.42
40 TestErrorSpam/status 0.11
41 TestErrorSpam/pause 0.14
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 8.46
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.92
55 TestFunctional/serial/CacheCmd/cache/add_local 1.14
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.25
71 TestFunctional/parallel/DryRun 0.29
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 0.27
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.62
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
126 TestFunctional/parallel/ProfileCmd/profile_list 0.09
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.05
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.37
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.21
193 TestMainNoArgs 0.04
240 TestStoppedBinaryUpgrade/Setup 1.03
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
257 TestNoKubernetes/serial/ProfileList 31.38
258 TestNoKubernetes/serial/Stop 3.31
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.69
275 TestStartStop/group/old-k8s-version/serial/Stop 3.54
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
286 TestStartStop/group/no-preload/serial/Stop 3.5
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
299 TestStartStop/group/embed-certs/serial/Stop 3.53
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.79
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.1
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.07
319 TestStartStop/group/newest-cni/serial/Stop 1.91
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1028 04:54:14.634907    7452 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1028 04:54:14.635271    7452 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-131000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-131000: exit status 85 (103.118958ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |          |
	|         | -p download-only-131000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 04:54:00
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 04:54:00.740121    7453 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:54:00.740288    7453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:54:00.740291    7453 out.go:358] Setting ErrFile to fd 2...
	I1028 04:54:00.740294    7453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:54:00.740421    7453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	W1028 04:54:00.740522    7453 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19875-6942/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19875-6942/.minikube/config/config.json: no such file or directory
	I1028 04:54:00.741913    7453 out.go:352] Setting JSON to true
	I1028 04:54:00.759862    7453 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5011,"bootTime":1730111429,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:54:00.759970    7453 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:54:00.764936    7453 out.go:97] [download-only-131000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:54:00.765056    7453 notify.go:220] Checking for updates...
	W1028 04:54:00.765147    7453 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball: no such file or directory
	I1028 04:54:00.768740    7453 out.go:169] MINIKUBE_LOCATION=19875
	I1028 04:54:00.771801    7453 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 04:54:00.775828    7453 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:54:00.778768    7453 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:54:00.781836    7453 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	W1028 04:54:00.787755    7453 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 04:54:00.788030    7453 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:54:00.790706    7453 out.go:97] Using the qemu2 driver based on user configuration
	I1028 04:54:00.790728    7453 start.go:297] selected driver: qemu2
	I1028 04:54:00.790742    7453 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:54:00.790803    7453 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:54:00.793816    7453 out.go:169] Automatically selected the socket_vmnet network
	I1028 04:54:00.799318    7453 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1028 04:54:00.799417    7453 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 04:54:00.799464    7453 cni.go:84] Creating CNI manager for ""
	I1028 04:54:00.799521    7453 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 04:54:00.799576    7453 start.go:340] cluster config:
	{Name:download-only-131000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-131000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:54:00.804459    7453 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:54:00.807909    7453 out.go:97] Downloading VM boot image ...
	I1028 04:54:00.807928    7453 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso
	I1028 04:54:06.476339    7453 out.go:97] Starting "download-only-131000" primary control-plane node in "download-only-131000" cluster
	I1028 04:54:06.476377    7453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 04:54:06.534173    7453 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 04:54:06.534196    7453 cache.go:56] Caching tarball of preloaded images
	I1028 04:54:06.534388    7453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 04:54:06.538523    7453 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 04:54:06.538530    7453 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 04:54:06.619060    7453 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1028 04:54:13.245145    7453 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 04:54:13.245340    7453 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1028 04:54:13.940270    7453 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 04:54:13.940488    7453 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/download-only-131000/config.json ...
	I1028 04:54:13.940508    7453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19875-6942/.minikube/profiles/download-only-131000/config.json: {Name:mk4445da67b8d452a34b26b9974bcb6d4ac2b382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 04:54:13.940793    7453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 04:54:13.941037    7453 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1028 04:54:14.584862    7453 out.go:193] 
	W1028 04:54:14.588408    7453 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320 0x105e1d320] Decompressors:map[bz2:0x1400012de70 gz:0x1400012de78 tar:0x1400012de20 tar.bz2:0x1400012de30 tar.gz:0x1400012de40 tar.xz:0x1400012de50 tar.zst:0x1400012de60 tbz2:0x1400012de30 tgz:0x1400012de40 txz:0x1400012de50 tzst:0x1400012de60 xz:0x1400012de80 zip:0x1400012de90 zst:0x1400012de88] Getters:map[file:0x140004326d0 http:0x14000a47180 https:0x14000a471d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1028 04:54:14.588436    7453 out_reason.go:110] 
	W1028 04:54:14.596917    7453 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 04:54:14.600884    7453 out.go:193] 
	
	
	* The control-plane node download-only-131000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-131000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
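
Note on the failure above: the v1.20.0 kubectl cache step 404s because dl.k8s.io apparently never published a darwin/arm64 kubectl (or its .sha256) for that release, so go-getter's checksum fetch fails before the binary download even starts. A minimal Go sketch to confirm the missing artifact, using the URL quoted verbatim in the getter error (network access assumed):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// HEAD the checksum file the getter tried to fetch; per the log,
		// this reports 404 Not Found for v1.20.0 on darwin/arm64.
		resp, err := http.Head("https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}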

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-131000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (7.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-803000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-803000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (7.942137208s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (7.94s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1028 04:54:22.955285    7452 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1028 04:54:22.955364    7452 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)
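
Note: the preload step works go-getter style: the tarball URL carries a "?checksum=md5:<hex>" parameter, the downloader verifies that digest, and only then does the tarball land in .minikube/cache, which is what this preload-exists check finds. A sketch of the equivalent verification; the path and expected digest are copied from the v1.31.2 download recorded later in this report:

	package main

	import (
		"crypto/md5"
		"fmt"
		"io"
		"os"
	)

	func main() {
		// recompute the digest the "?checksum=md5:..." URL promises
		f, err := os.Open("/Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		fmt.Printf("%x\n", h.Sum(nil)) // expect 5f3d7369b12f47138aa2863bb7bda916
	}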

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-803000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-803000: exit status 85 (83.287541ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | -p download-only-131000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	| delete  | -p download-only-131000        | download-only-131000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT | 28 Oct 24 04:54 PDT |
	| start   | -o=json --download-only        | download-only-803000 | jenkins | v1.34.0 | 28 Oct 24 04:54 PDT |                     |
	|         | -p download-only-803000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 04:54:15
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 04:54:15.044642    7480 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:54:15.044811    7480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:54:15.044815    7480 out.go:358] Setting ErrFile to fd 2...
	I1028 04:54:15.044817    7480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:54:15.044938    7480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:54:15.046058    7480 out.go:352] Setting JSON to true
	I1028 04:54:15.063593    7480 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5026,"bootTime":1730111429,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:54:15.063667    7480 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:54:15.067865    7480 out.go:97] [download-only-803000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:54:15.067975    7480 notify.go:220] Checking for updates...
	I1028 04:54:15.071867    7480 out.go:169] MINIKUBE_LOCATION=19875
	I1028 04:54:15.074771    7480 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 04:54:15.078834    7480 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:54:15.081939    7480 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:54:15.084826    7480 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	W1028 04:54:15.090830    7480 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 04:54:15.091011    7480 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:54:15.092368    7480 out.go:97] Using the qemu2 driver based on user configuration
	I1028 04:54:15.092375    7480 start.go:297] selected driver: qemu2
	I1028 04:54:15.092379    7480 start.go:901] validating driver "qemu2" against <nil>
	I1028 04:54:15.092416    7480 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 04:54:15.095824    7480 out.go:169] Automatically selected the socket_vmnet network
	I1028 04:54:15.101259    7480 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1028 04:54:15.101339    7480 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 04:54:15.101361    7480 cni.go:84] Creating CNI manager for ""
	I1028 04:54:15.101384    7480 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 04:54:15.101389    7480 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 04:54:15.101437    7480 start.go:340] cluster config:
	{Name:download-only-803000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:54:15.105745    7480 iso.go:125] acquiring lock: {Name:mk4ecc443556c069f8dee9d8fee56889fa301837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 04:54:15.108953    7480 out.go:97] Starting "download-only-803000" primary control-plane node in "download-only-803000" cluster
	I1028 04:54:15.108962    7480 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:54:15.163632    7480 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1028 04:54:15.163656    7480 cache.go:56] Caching tarball of preloaded images
	I1028 04:54:15.163871    7480 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 04:54:15.168054    7480 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1028 04:54:15.168062    7480 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1028 04:54:15.248146    7480 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/19875-6942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-803000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-803000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.08s)
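
Note: comparing this dump with the v1.20.0 one shows the CNI decision flip: cni.go reports "CNI unnecessary in this configuration" for v1.20.0 but "recommending bridge" for v1.31.2, i.e. the docker runtime on kubernetes v1.24+ (post-dockershim). A dependency-free sketch of that version gate; the helper name and exact comparison are illustrative assumptions, not minikube's code:

	package main

	import "fmt"

	// chooseCNI mirrors the gate implied by the two cni.go lines: with the
	// docker runtime, kubernetes v1.24 and newer gets the bridge CNI,
	// older versions get none.
	func chooseCNI(major, minor int) string {
		if major > 1 || (major == 1 && minor >= 24) {
			return "bridge"
		}
		return "" // "CNI unnecessary in this configuration"
	}

	func main() {
		fmt.Printf("v1.20.0 -> %q\n", chooseCNI(1, 20)) // ""
		fmt.Printf("v1.31.2 -> %q\n", chooseCNI(1, 31)) // "bridge"
	}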

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-803000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestBinaryMirror (0.3s)

                                                
                                                
=== RUN   TestBinaryMirror
I1028 04:54:23.487755    7452 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-237000 --alsologtostderr --binary-mirror http://127.0.0.1:57821 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-237000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-237000
--- PASS: TestBinaryMirror (0.30s)
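
Note: TestBinaryMirror points --binary-mirror at a short-lived local HTTP server (127.0.0.1:57821 in this run) and checks that minikube fetches kubectl from it instead of dl.k8s.io. A rough sketch of such a throwaway mirror, assuming a local directory of pre-fetched release binaries ("testdata/binaries" is a placeholder, not this run's layout):

	package main

	import (
		"fmt"
		"net/http"
		"net/http/httptest"
	)

	func main() {
		// serve pre-downloaded release binaries over loopback
		srv := httptest.NewServer(http.FileServer(http.Dir("testdata/binaries")))
		defer srv.Close()
		fmt.Println(srv.URL) // pass as: minikube start --binary-mirror <URL>
	}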

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-578000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-578000: exit status 85 (57.674333ms)

                                                
                                                
-- stdout --
	* Profile "addons-578000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-578000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-578000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-578000: exit status 85 (57.656ms)

                                                
                                                
-- stdout --
	* Profile "addons-578000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-578000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (11.15s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
I1028 05:05:06.228747    7452 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 05:05:06.228965    7452 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1028 05:05:08.168665    7452 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1028 05:05:08.168934    7452 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1028 05:05:08.168985    7452 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/001/docker-machine-driver-hyperkit
I1028 05:05:08.691260    7452 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10542a6e0 0x10542a6e0 0x10542a6e0 0x10542a6e0 0x10542a6e0 0x10542a6e0 0x10542a6e0] Decompressors:map[bz2:0x140006a7cc0 gz:0x140006a7cc8 tar:0x140006a7c30 tar.bz2:0x140006a7c50 tar.gz:0x140006a7c60 tar.xz:0x140006a7c90 tar.zst:0x140006a7ca0 tbz2:0x140006a7c50 tgz:0x140006a7c60 txz:0x140006a7c90 tzst:0x140006a7ca0 xz:0x140006a7ce0 zip:0x140006a7cf0 zst:0x140006a7ce8] Getters:map[file:0x14000c85a40 http:0x14000c97130 https:0x14000c97180] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 05:05:08.691376    7452 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1881623798/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (11.15s)
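
Note: the two download.go lines above record the installer's fallback: the GOARCH-suffixed release asset (docker-machine-driver-hyperkit-arm64) 404s, so it retries the unsuffixed "common" binary. A self-contained sketch of that pattern; download() is a plain-HTTP stand-in for minikube's getter (no checksum handling), not its actual code:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
		"runtime"
	)

	// download fetches url to dst, erroring on non-200 responses.
	func download(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, resp.Body)
		return err
	}

	func fetchDriver(base, name, dst string) error {
		// try the arch-specific asset first, as install.go does...
		if err := download(fmt.Sprintf("%s/%s-%s", base, name, runtime.GOARCH), dst); err == nil {
			return nil
		}
		// ...then fall back to the common version (the path this run took)
		return download(fmt.Sprintf("%s/%s", base, name), dst)
	}

	func main() {
		dst := os.TempDir() + "/docker-machine-driver-hyperkit"
		err := fetchDriver("https://github.com/kubernetes/minikube/releases/download/v1.3.0",
			"docker-machine-driver-hyperkit", dst)
		fmt.Println("err:", err)
	}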

                                                
                                    
TestErrorSpam/start (0.42s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 start --dry-run
--- PASS: TestErrorSpam/start (0.42s)

                                                
                                    
TestErrorSpam/status (0.11s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 status: exit status 7 (37.068917ms)

                                                
                                                
-- stdout --
	nospam-220000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 status: exit status 7 (34.3605ms)

                                                
                                                
-- stdout --
	nospam-220000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 status: exit status 7 (33.908417ms)

                                                
                                                
-- stdout --
	nospam-220000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.11s)
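
Note: the repeated "exit status 7" is consistent with reading the status command's exit code as a bitmask over what is stopped; the flag values below reflect my understanding of minikube's status semantics, not anything this log states directly:

	package main

	import "fmt"

	func main() {
		const (
			hostNotRunning    = 1 << 0 // 1: VM/host stopped
			clusterNotRunning = 1 << 1 // 2: control plane stopped
			k8sNotRunning     = 1 << 2 // 4: kubernetes components stopped
		)
		// all three stopped, as the status output above shows
		fmt.Println(hostNotRunning | clusterNotRunning | k8sNotRunning) // 7
	}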

                                                
                                    
TestErrorSpam/pause (0.14s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 pause: exit status 83 (44.825ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-220000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-220000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 pause: exit status 83 (45.887666ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-220000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-220000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 pause: exit status 83 (45.72325ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-220000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-220000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.14s)

                                                
                                    
TestErrorSpam/unpause (0.13s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 unpause: exit status 83 (42.774167ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-220000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-220000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 unpause: exit status 83 (43.959333ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-220000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-220000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 unpause: exit status 83 (43.628416ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-220000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-220000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

                                                
                                    
TestErrorSpam/stop (8.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 stop: (1.974551875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 stop: (3.230557875s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-220000 stop: (3.256358542s)
--- PASS: TestErrorSpam/stop (8.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19875-6942/.minikube/files/etc/test/nested/copy/7452/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (1.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.92s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-238000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3361771353/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 cache add minikube-local-cache-test:functional-238000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 cache delete minikube-local-cache-test:functional-238000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-238000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 config get cpus: exit status 14 (36.178125ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 config get cpus: exit status 14 (39.29875ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
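
Note: config get exits with status 14 once the key is unset, which the test asserts twice above. A sketch of checking that exit code from Go, using the same binary path as the rest of this report:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-238000", "config", "get", "cpus")
		if err := cmd.Run(); err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) {
				fmt.Println(ee.ExitCode()) // 14 while the key is unset, per the log
			}
		}
	}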

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-238000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-238000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (168.254042ms)

                                                
                                                
-- stdout --
	* [functional-238000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 04:56:06.213057    8040 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:56:06.213246    8040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:56:06.213250    8040 out.go:358] Setting ErrFile to fd 2...
	I1028 04:56:06.213254    8040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:56:06.213435    8040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:56:06.214756    8040 out.go:352] Setting JSON to false
	I1028 04:56:06.234458    8040 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5137,"bootTime":1730111429,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:56:06.234525    8040 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:56:06.238742    8040 out.go:177] * [functional-238000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1028 04:56:06.246636    8040 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 04:56:06.246685    8040 notify.go:220] Checking for updates...
	I1028 04:56:06.254675    8040 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 04:56:06.257612    8040 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:56:06.260713    8040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:56:06.263686    8040 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 04:56:06.266621    8040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:56:06.269944    8040 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:56:06.270217    8040 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:56:06.274724    8040 out.go:177] * Using the qemu2 driver based on existing profile
	I1028 04:56:06.281654    8040 start.go:297] selected driver: qemu2
	I1028 04:56:06.281659    8040 start.go:901] validating driver "qemu2" against &{Name:functional-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:56:06.281709    8040 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:56:06.288538    8040 out.go:201] 
	W1028 04:56:06.292704    8040 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1028 04:56:06.296698    8040 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-238000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
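
Note: the dry run fails validation rather than provisioning: 250MB is below minikube's 1800MB usable-memory floor, hence RSRC_INSUFFICIENT_REQ_MEMORY. A minimal sketch of that check; the floor is taken from the message above, while the function shape is an assumption:

	package main

	import "fmt"

	// validateMemory approximates the guard the --dry-run invocation trips.
	func validateMemory(reqMB int) error {
		const minUsableMB = 1800
		if reqMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB", reqMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // fails, as in the test
		fmt.Println(validateMemory(4000)) // passes: the suite's usual size
	}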

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-238000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-238000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.897042ms)

                                                
                                                
-- stdout --
	* [functional-238000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 04:56:06.451175    8051 out.go:345] Setting OutFile to fd 1 ...
	I1028 04:56:06.451319    8051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:56:06.451322    8051 out.go:358] Setting ErrFile to fd 2...
	I1028 04:56:06.451324    8051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 04:56:06.451445    8051 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19875-6942/.minikube/bin
	I1028 04:56:06.452903    8051 out.go:352] Setting JSON to false
	I1028 04:56:06.471254    8051 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5137,"bootTime":1730111429,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1028 04:56:06.471338    8051 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 04:56:06.475785    8051 out.go:177] * [functional-238000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1028 04:56:06.482700    8051 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 04:56:06.482797    8051 notify.go:220] Checking for updates...
	I1028 04:56:06.490703    8051 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	I1028 04:56:06.493684    8051 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1028 04:56:06.496700    8051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 04:56:06.499644    8051 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	I1028 04:56:06.502723    8051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 04:56:06.505970    8051 config.go:182] Loaded profile config "functional-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 04:56:06.506238    8051 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 04:56:06.510664    8051 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1028 04:56:06.517701    8051 start.go:297] selected driver: qemu2
	I1028 04:56:06.517707    8051 start.go:901] validating driver "qemu2" against &{Name:functional-238000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 04:56:06.517758    8051 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 04:56:06.524668    8051 out.go:201] 
	W1028 04:56:06.528683    8051 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1028 04:56:06.532659    8051 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.1s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/License (0.27s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.62s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.591295291s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-238000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-238000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image rm kicbase/echo-server:functional-238000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-238000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 image save --daemon kicbase/echo-server:functional-238000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-238000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.1s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

TestFunctional/parallel/ProfileCmd/profile_list (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "52.868375ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "36.687417ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "50.981166ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "37.847ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012425542s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.05s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-238000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-238000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-238000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-238000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.37s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-341000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-341000 --output=json --user=testUser: (3.370205542s)
--- PASS: TestJSONOutput/stop/Command (3.37s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-012000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-012000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (99.962875ms)

-- stdout --
	{"specversion":"1.0","id":"ad11b4e5-0167-49c8-a41b-1a5614a1e7bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-012000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a51d694a-ba2d-4530-8274-7c008d3d5c5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19875"}}
	{"specversion":"1.0","id":"8c302541-cb68-4d95-a3be-30ea6be2a140","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig"}}
	{"specversion":"1.0","id":"0c82cfea-d7f2-401c-a623-194934b93b18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"7212b372-e028-4d18-84da-3dff4a58de67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2656ac15-d5bf-4655-9d77-f2e9374cab49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube"}}
	{"specversion":"1.0","id":"8d8ba05d-9588-4d98-9aa1-71bcde84bf47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c6378b4f-802e-44db-9c36-b4343fb96021","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-012000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-012000
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.03s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.03s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-489000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-489000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (104.918084ms)

-- stdout --
	* [NoKubernetes-489000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19875
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19875-6942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19875-6942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-489000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-489000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.627125ms)

-- stdout --
	* The control-plane node NoKubernetes-489000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-489000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (31.38s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.666348875s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.717840375s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.38s)

TestNoKubernetes/serial/Stop (3.31s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-489000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-489000: (3.310363917s)
--- PASS: TestNoKubernetes/serial/Stop (3.31s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-489000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-489000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.662042ms)

-- stdout --
	* The control-plane node NoKubernetes-489000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-489000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-451000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

TestStartStop/group/old-k8s-version/serial/Stop (3.54s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-180000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-180000 --alsologtostderr -v=3: (3.536501291s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-180000 -n old-k8s-version-180000: exit status 7 (50.390166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-180000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.5s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-590000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-590000 --alsologtostderr -v=3: (3.495655375s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.50s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-590000 -n no-preload-590000: exit status 7 (57.017625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-590000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.53s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-384000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-384000 --alsologtostderr -v=3: (3.531654375s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.53s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.79s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-220000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-220000 --alsologtostderr -v=3: (3.788897083s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.79s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (34.639792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-384000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (60.584041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-220000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.07s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-641000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.07s)

TestStartStop/group/newest-cni/serial/Stop (1.91s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-641000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-641000 --alsologtostderr -v=3: (1.913640416s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.91s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-641000 -n newest-cni-641000: exit status 7 (67.017375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-641000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (13.9s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-238000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3214354085/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730116524666909000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3214354085/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730116524666909000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3214354085/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730116524666909000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3214354085/001/test-1730116524666909000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (63.174334ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:24.731290    7452 retry.go:31] will retry after 500.519974ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.779667ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:25.325959    7452 retry.go:31] will retry after 1.111643086s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.611083ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:26.530556    7452 retry.go:31] will retry after 1.66062689s: exit status 83
I1028 04:55:26.952807    7452 retry.go:31] will retry after 4.02083967s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.449208ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:28.283040    7452 retry.go:31] will retry after 1.422895977s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (96.28575ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:29.804613    7452 retry.go:31] will retry after 3.70804737s: exit status 83
I1028 04:55:30.976025    7452 retry.go:31] will retry after 9.73158824s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (95.050333ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:33.610165    7452 retry.go:31] will retry after 4.700168073s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.538417ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "sudo umount -f /mount-9p": exit status 83 (48.81925ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-238000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-238000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3214354085/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.90s)

TestFunctional/parallel/MountCmd/specific-port (12.46s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-238000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1605480439/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (70.664708ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:38.643402    7452 retry.go:31] will retry after 740.086792ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.145417ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:39.476002    7452 retry.go:31] will retry after 549.392213ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.846458ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:40.119588    7452 retry.go:31] will retry after 667.137702ms: exit status 83
I1028 04:55:40.710065    7452 retry.go:31] will retry after 8.986240108s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.735875ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:40.881787    7452 retry.go:31] will retry after 1.868778592s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.882208ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:42.844904    7452 retry.go:31] will retry after 2.54015026s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.364417ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:45.475874    7452 retry.go:31] will retry after 5.296568871s: exit status 83
I1028 04:55:49.698738    7452 retry.go:31] will retry after 13.77696699s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.071875ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "sudo umount -f /mount-9p": exit status 83 (49.65625ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-238000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-238000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1605480439/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.46s)
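[Editor's note] The retry.go lines above show the test's findmnt probe being re-run with growing, jittered delays until the mount check is abandoned. A minimal sketch of that retry-with-backoff pattern in Go follows; the helper name, backoff constants, and timeout are illustrative assumptions, not minikube's actual implementation:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// runProbe re-runs the given command until it succeeds or maxWait elapses,
// roughly doubling the delay (plus jitter) between attempts, in the spirit
// of the "will retry after ..." lines above.
func runProbe(name string, args []string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for {
		err := exec.Command(name, args...).Run()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("probe %s never succeeded: %w", name, err)
		}
		// Up to 50% jitter so parallel tests do not retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	// The same findmnt probe the test runs (invocation copied from the log).
	err := runProbe("out/minikube-darwin-arm64",
		[]string{"-p", "functional-238000", "ssh", "findmnt -T /mount-9p | grep 9p"},
		10*time.Second)
	if err != nil {
		fmt.Println("mount did not appear:", err)
	}
}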

TestFunctional/parallel/MountCmd/VerifyCleanup (15.11s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-238000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3998320924/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-238000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3998320924/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-238000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3998320924/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1: exit status 83 (81.787792ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:51.123404    7452 retry.go:31] will retry after 273.014169ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1: exit status 83 (89.235167ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:51.487897    7452 retry.go:31] will retry after 744.548105ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1: exit status 83 (93.619833ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:52.328401    7452 retry.go:31] will retry after 771.555493ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1: exit status 83 (93.414458ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:53.195874    7452 retry.go:31] will retry after 1.90533307s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1: exit status 83 (93.2715ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:55.196812    7452 retry.go:31] will retry after 3.217327244s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1: exit status 83 (89.006542ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:55:58.505555    7452 retry.go:31] will retry after 2.420688159s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1: exit status 83 (90.43425ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
I1028 04:56:01.018992    7452 retry.go:31] will retry after 4.633062147s: exit status 83
I1028 04:56:03.478128    7452 retry.go:31] will retry after 36.556278013s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-238000 ssh "findmnt -T" /mount1: exit status 83 (90.9645ms)

-- stdout --
	* The control-plane node functional-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-238000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-238000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3998320924/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-238000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3998320924/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-238000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3998320924/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (15.11s)
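[Editor's note] VerifyCleanup starts three mount daemons and then stops each one, as the "(dbg) daemon:" and "(dbg) stopping" lines above record. A minimal Go sketch of that start/stop pattern follows; the binary path and flags mirror the log, but the source path and harness code are hypothetical, not the actual test:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-darwin-arm64"
	src := "/tmp/mountsrc" // stand-in for the per-test temp directory
	var daemons []*exec.Cmd
	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		cmd := exec.Command(bin, "mount", "-p", "functional-238000",
			src+":"+target, "--alsologtostderr", "-v=1")
		if err := cmd.Start(); err != nil {
			fmt.Println("daemon failed to start:", err)
			continue
		}
		daemons = append(daemons, cmd)
	}
	// ...assert on the mounts here; the test above skipped because none appeared...
	for _, cmd := range daemons {
		fmt.Printf("stopping %v ...\n", cmd.Args)
		_ = cmd.Process.Kill() // the real harness stops daemons more gracefully
		_, _ = cmd.Process.Wait()
	}
}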

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.48s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-181000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-181000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-181000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-181000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-181000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-181000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-181000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-181000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-181000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-181000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-181000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: /etc/hosts:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: /etc/resolv.conf:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-181000

>>> host: crictl pods:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: crictl containers:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> k8s: describe netcat deployment:
error: context "cilium-181000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-181000" does not exist

>>> k8s: netcat logs:
error: context "cilium-181000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-181000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-181000" does not exist

>>> k8s: coredns logs:
error: context "cilium-181000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-181000" does not exist

>>> k8s: api server logs:
error: context "cilium-181000" does not exist

>>> host: /etc/cni:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: ip a s:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: ip r s:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: iptables-save:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: iptables table nat:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-181000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-181000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-181000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-181000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-181000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-181000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-181000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-181000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-181000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-181000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-181000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: kubelet daemon config:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> k8s: kubelet logs:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-181000

>>> host: docker daemon status:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: docker daemon config:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: docker system info:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: cri-docker daemon status:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: cri-docker daemon config:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: cri-dockerd version:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: containerd daemon status:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: containerd daemon config:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: containerd config dump:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: crio daemon status:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: crio daemon config:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: /etc/crio:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

>>> host: crio config:
* Profile "cilium-181000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181000"

----------------------- debugLogs end: cilium-181000 [took: 2.3657735s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-181000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-181000
--- SKIP: TestNetworkPlugins/group/cilium (2.48s)
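[Editor's note] The debugLogs block above follows a fixed pattern: a harness runs a list of diagnostic probes against the profile and prints each result under a ">>>" header. A minimal Go sketch of that pattern follows; the probe list and two sample commands are illustrative assumptions, not the real test helpers:

package main

import (
	"fmt"
	"os/exec"
)

// probe pairs a ">>>" header with the command whose output is collected.
type probe struct {
	label string
	args  []string
}

func main() {
	profile := "cilium-181000"
	probes := []probe{
		{"k8s: kubectl config", []string{"kubectl", "config", "view"}},
		{"host: /etc/resolv.conf", []string{"out/minikube-darwin-arm64", "-p", profile, "ssh", "cat /etc/resolv.conf"}},
	}
	fmt.Printf("----- debugLogs start: %s -----\n", profile)
	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.label)
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println(err) // e.g. the profile-not-found messages above
		}
		fmt.Println()
	}
	fmt.Printf("----- debugLogs end: %s -----\n", profile)
}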

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-193000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-193000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)