Test Report: QEMU_macOS 19644

c0eea096ace35e11d6c690a668e6718dc1bec60e:2024-09-14:36219

Failed tests (157/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.88
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.27
22 TestOffline 10.21
27 TestAddons/Setup 10.5
28 TestCertOptions 10.16
29 TestCertExpiration 197.38
30 TestDockerFlags 12.25
31 TestForceSystemdFlag 11.54
32 TestForceSystemdEnv 10.14
38 TestErrorSpam/setup 9.83
47 TestFunctional/serial/StartWithProxy 10.15
49 TestFunctional/serial/SoftStart 5.25
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 1.95
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.05
63 TestFunctional/serial/ExtraConfig 5.25
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.17
77 TestFunctional/parallel/ServiceCmdConnect 0.13
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.3
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.29
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 115.18
100 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
101 TestFunctional/parallel/ServiceCmd/List 0.04
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
104 TestFunctional/parallel/ServiceCmd/Format 0.04
105 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/Version/components 0.04
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
118 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.3
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
127 TestFunctional/parallel/DockerEnv/bash 0.05
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 10.1
142 TestMultiControlPlane/serial/DeployApp 90.43
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
150 TestMultiControlPlane/serial/RestartSecondaryNode 45.59
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.98
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 1.93
156 TestMultiControlPlane/serial/RestartCluster 5.25
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
162 TestImageBuild/serial/Setup 10
165 TestJSONOutput/start/Command 9.86
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.77
197 TestMountStart/serial/StartWithMountFirst 10.11
200 TestMultiNode/serial/FreshStart2Nodes 9.97
201 TestMultiNode/serial/DeployApp2Nodes 77.21
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.08
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 53.26
209 TestMultiNode/serial/RestartKeepsNodes 7.44
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.91
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.57
217 TestPreload 10.22
219 TestScheduledStopUnix 10.12
220 TestSkaffold 12.06
223 TestRunningBinaryUpgrade 619.57
225 TestKubernetesUpgrade 19.44
239 TestStoppedBinaryUpgrade/Upgrade 585.99
249 TestPause/serial/Start 9.88
252 TestNoKubernetes/serial/StartWithK8s 9.87
253 TestNoKubernetes/serial/StartWithStopK8s 7.47
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.38
255 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.43
256 TestNoKubernetes/serial/Start 5.26
260 TestNoKubernetes/serial/StartNoArgs 7.2
262 TestNetworkPlugins/group/auto/Start 9.91
263 TestNetworkPlugins/group/kindnet/Start 9.9
264 TestNetworkPlugins/group/calico/Start 9.93
265 TestNetworkPlugins/group/custom-flannel/Start 9.95
266 TestNetworkPlugins/group/false/Start 9.83
267 TestNetworkPlugins/group/enable-default-cni/Start 9.96
268 TestNetworkPlugins/group/flannel/Start 10.03
269 TestNetworkPlugins/group/bridge/Start 9.85
270 TestNetworkPlugins/group/kubenet/Start 9.92
272 TestStartStop/group/old-k8s-version/serial/FirstStart 10.37
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.11
283 TestStartStop/group/no-preload/serial/FirstStart 9.89
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.25
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
292 TestStartStop/group/no-preload/serial/Pause 0.1
294 TestStartStop/group/embed-certs/serial/FirstStart 9.93
295 TestStartStop/group/embed-certs/serial/DeployApp 0.09
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
299 TestStartStop/group/embed-certs/serial/SecondStart 5.26
300 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
301 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
302 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
303 TestStartStop/group/embed-certs/serial/Pause 0.1
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.89
307 TestStartStop/group/newest-cni/serial/FirstStart 9.91
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.3
317 TestStartStop/group/newest-cni/serial/SecondStart 5.25
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.11
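
Note on the table above: the short (~10 s) start failures match the socket_vmnet error detailed in the TestOffline log further down ('Failed to connect to "/var/run/socket_vmnet": Connection refused' from the qemu2 driver). A minimal standalone Go check of that socket, as a debugging sketch only (the socket path comes from the logs; the program is not part of the test suite):

	// vmnet_check.go: dial the unix socket the qemu2 driver needs.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Socket path as reported in the TestOffline failure below.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// Matches the "Connection refused" signature in the logs.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
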
TestDownloadOnly/v1.20.0/json-events (13.88s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-312000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-312000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.874365167s)

-- stdout --
	{"specversion":"1.0","id":"5757dc85-b56a-463d-8f4d-e00fea1a3fb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-312000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9d8abd6-0e3f-4a9c-b1d4-be540aa4670e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"b03ea611-380e-470f-a57e-d5a61b150031","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig"}}
	{"specversion":"1.0","id":"7fbaf3bc-88df-4b9f-9e7e-72287a3e0581","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b6da5927-1af8-41e1-b61d-27e90bf3fc16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"35aa6577-5bc5-4a32-8d7a-1c7b4e0bbe87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube"}}
	{"specversion":"1.0","id":"f133d767-f835-4538-8247-0da2cafbbc10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"53261aa7-07b4-4184-a82f-4fe0a2c47996","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e5e2562-6b12-444f-9531-0ce0ee40e7f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"1efdbd74-5790-4698-9ee2-9aa9e4b87142","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"361d063b-56e5-41c8-902d-5db423fb3e09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-312000\" primary control-plane node in \"download-only-312000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9d1417c-f658-4bc7-ad8a-d9b892d06159","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"135ac591-c603-403d-b2ed-dcf3873f7bbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107125780 0x107125780 0x107125780 0x107125780 0x107125780 0x107125780 0x107125780] Decompressors:map[bz2:0x1400000fee0 gz:0x1400000fee8 tar:0x1400000fe60 tar.bz2:0x1400000fe70 tar.gz:0x1400000feb0 tar.xz:0x1400000fec0 tar.zst:0x1400000fed0 tbz2:0x1400000fe70 tgz:0x1400000feb0 txz:0x1400000fec0 tzst:0x1400000fed0 xz:0x1400000ff00 zip:0x1400000ff10 zst:0x1400000ff08] Getters:map[file:0x140018045b0 http:0x140008a4280 https:0x140008a42d0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"c61a1307-bb39-48ff-ac40-ba33fc8a03c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0914 23:28:44.902977    7095 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:28:44.903125    7095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:28:44.903129    7095 out.go:358] Setting ErrFile to fd 2...
	I0914 23:28:44.903132    7095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:28:44.903255    7095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	W0914 23:28:44.903352    7095 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19644-6577/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19644-6577/.minikube/config/config.json: no such file or directory
	I0914 23:28:44.904731    7095 out.go:352] Setting JSON to true
	I0914 23:28:44.922895    7095 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5293,"bootTime":1726376431,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:28:44.922970    7095 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:28:44.927812    7095 out.go:97] [download-only-312000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:28:44.927953    7095 notify.go:220] Checking for updates...
	W0914 23:28:44.928083    7095 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 23:28:44.931984    7095 out.go:169] MINIKUBE_LOCATION=19644
	I0914 23:28:44.935806    7095 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:28:44.940310    7095 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:28:44.944079    7095 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:28:44.948826    7095 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	W0914 23:28:44.956500    7095 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 23:28:44.956686    7095 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:28:44.960374    7095 out.go:97] Using the qemu2 driver based on user configuration
	I0914 23:28:44.960392    7095 start.go:297] selected driver: qemu2
	I0914 23:28:44.960406    7095 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:28:44.960470    7095 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:28:44.964217    7095 out.go:169] Automatically selected the socket_vmnet network
	I0914 23:28:44.970773    7095 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0914 23:28:44.970882    7095 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 23:28:44.970927    7095 cni.go:84] Creating CNI manager for ""
	I0914 23:28:44.970958    7095 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 23:28:44.971006    7095 start.go:340] cluster config:
	{Name:download-only-312000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:28:44.975147    7095 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:28:44.977779    7095 out.go:97] Downloading VM boot image ...
	I0914 23:28:44.977795    7095 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso
	I0914 23:28:52.110719    7095 out.go:97] Starting "download-only-312000" primary control-plane node in "download-only-312000" cluster
	I0914 23:28:52.110744    7095 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 23:28:52.167091    7095 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 23:28:52.167114    7095 cache.go:56] Caching tarball of preloaded images
	I0914 23:28:52.168299    7095 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 23:28:52.173258    7095 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0914 23:28:52.173264    7095 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 23:28:52.260996    7095 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 23:28:57.353129    7095 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 23:28:57.353295    7095 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 23:28:58.048920    7095 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0914 23:28:58.049146    7095 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/download-only-312000/config.json ...
	I0914 23:28:58.049163    7095 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/download-only-312000/config.json: {Name:mk3ecd4c85776eff039951c78276834f03d90b00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:28:58.050275    7095 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 23:28:58.050667    7095 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0914 23:28:58.696395    7095 out.go:193] 
	W0914 23:28:58.702304    7095 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107125780 0x107125780 0x107125780 0x107125780 0x107125780 0x107125780 0x107125780] Decompressors:map[bz2:0x1400000fee0 gz:0x1400000fee8 tar:0x1400000fe60 tar.bz2:0x1400000fe70 tar.gz:0x1400000feb0 tar.xz:0x1400000fec0 tar.zst:0x1400000fed0 tbz2:0x1400000fe70 tgz:0x1400000feb0 txz:0x1400000fec0 tzst:0x1400000fed0 xz:0x1400000ff00 zip:0x1400000ff10 zst:0x1400000ff08] Getters:map[file:0x140018045b0 http:0x140008a4280 https:0x140008a42d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0914 23:28:58.702329    7095 out_reason.go:110] 
	W0914 23:28:58.711164    7095 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:28:58.715204    7095 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-312000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.88s)
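
Note: the exit status 40 above traces to the 404 on the kubectl checksum URL; dl.k8s.io apparently publishes no darwin/arm64 kubectl for v1.20.0. A throwaway Go HEAD request can reproduce the 404 outside minikube (a sketch using plain net/http, not minikube's downloader):

	// kubectl_404_check.go: confirm the checksum URL from the failure above.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// URL copied from the log; the kubectl binary URL presumably 404s the same way.
		resp, err := http.Head("https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256")
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status) // the report implies 404 Not Found
	}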

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestBinaryMirror (0.27s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-368000 --alsologtostderr --binary-mirror http://127.0.0.1:51049 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-368000 --alsologtostderr --binary-mirror http://127.0.0.1:51049 --driver=qemu2 : exit status 40 (164.3335ms)

-- stdout --
	* [binary-mirror-368000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-368000" primary control-plane node in "binary-mirror-368000" cluster
	
	

-- /stdout --
** stderr ** 
	I0914 23:29:06.045768    7156 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:29:06.045899    7156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:29:06.045903    7156 out.go:358] Setting ErrFile to fd 2...
	I0914 23:29:06.045905    7156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:29:06.046053    7156 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:29:06.047125    7156 out.go:352] Setting JSON to false
	I0914 23:29:06.063052    7156 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5315,"bootTime":1726376431,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:29:06.063123    7156 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:29:06.068467    7156 out.go:177] * [binary-mirror-368000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:29:06.076207    7156 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:29:06.076312    7156 notify.go:220] Checking for updates...
	I0914 23:29:06.084372    7156 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:29:06.087389    7156 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:29:06.090415    7156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:29:06.093410    7156 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:29:06.096576    7156 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:29:06.100329    7156 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:29:06.107409    7156 start.go:297] selected driver: qemu2
	I0914 23:29:06.107416    7156 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:29:06.107482    7156 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:29:06.111454    7156 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:29:06.116658    7156 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0914 23:29:06.116836    7156 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 23:29:06.116857    7156 cni.go:84] Creating CNI manager for ""
	I0914 23:29:06.116881    7156 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:29:06.116889    7156 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:29:06.116932    7156 start.go:340] cluster config:
	{Name:binary-mirror-368000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:51049 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:29:06.120564    7156 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:29:06.128412    7156 out.go:177] * Starting "binary-mirror-368000" primary control-plane node in "binary-mirror-368000" cluster
	I0914 23:29:06.132387    7156 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:29:06.132411    7156 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:29:06.132422    7156 cache.go:56] Caching tarball of preloaded images
	I0914 23:29:06.132525    7156 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:29:06.132531    7156 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:29:06.132782    7156 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/binary-mirror-368000/config.json ...
	I0914 23:29:06.132797    7156 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/binary-mirror-368000/config.json: {Name:mk31c38d2baca094f9175cbe8531bbfa903a25ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:29:06.133148    7156 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:29:06.133205    7156 download.go:107] Downloading: http://127.0.0.1:51049/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51049/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0914 23:29:06.159499    7156 out.go:201] 
	W0914 23:29:06.163375    7156 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:51049/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51049/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:51049/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51049/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105689780 0x105689780 0x105689780 0x105689780 0x105689780 0x105689780 0x105689780] Decompressors:map[bz2:0x14000486530 gz:0x14000486538 tar:0x14000486230 tar.bz2:0x140004863c0 tar.gz:0x14000486430 tar.xz:0x140004864a0 tar.zst:0x14000486510 tbz2:0x140004863c0 tgz:0x14000486430 txz:0x140004864a0 tzst:0x14000486510 xz:0x14000486590 zip:0x140004865b0 zst:0x14000486598] Getters:map[file:0x140003f6810 http:0x1400051dea0 https:0x1400051def0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:51049/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51049/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:51049/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51049/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105689780 0x105689780 0x105689780 0x105689780 0x105689780 0x105689780 0x105689780] Decompressors:map[bz2:0x14000486530 gz:0x14000486538 tar:0x14000486230 tar.bz2:0x140004863c0 tar.gz:0x14000486430 tar.xz:0x140004864a0 tar.zst:0x14000486510 tbz2:0x140004863c0 tgz:0x14000486430 txz:0x140004864a0 tzst:0x14000486510 xz:0x14000486590 zip:0x140004865b0 zst:0x14000486598] Getters:map[file:0x140003f6810 http:0x1400051dea0 https:0x1400051def0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0914 23:29:06.163381    7156 out.go:270] * 
	* 
	W0914 23:29:06.163843    7156 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:29:06.176405    7156 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-368000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:51049" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-368000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-368000
--- FAIL: TestBinaryMirror (0.27s)
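
Note: judging by the request URL in the log, --binary-mirror expects an HTTP server laid out as /<version>/bin/<os>/<arch>/kubectl with a sibling kubectl.sha256; what appears to be the test's own short-lived local server on 127.0.0.1:51049 is what returned "unexpected EOF" here. A minimal sketch of such a mirror in Go (the ./mirror directory name is hypothetical):

	// mirror.go: serve a tree like ./mirror/v1.31.1/bin/darwin/arm64/kubectl plus kubectl.sha256.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Layout inferred from the request in the log above.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:51049", nil))
	}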

TestOffline (10.21s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-506000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-506000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.03875825s)

-- stdout --
	* [offline-docker-506000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-506000" primary control-plane node in "offline-docker-506000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-506000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:39:52.274803    8751 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:39:52.274957    8751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:39:52.274962    8751 out.go:358] Setting ErrFile to fd 2...
	I0914 23:39:52.274964    8751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:39:52.275096    8751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:39:52.276352    8751 out.go:352] Setting JSON to false
	I0914 23:39:52.294193    8751 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5961,"bootTime":1726376431,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:39:52.294269    8751 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:39:52.299486    8751 out.go:177] * [offline-docker-506000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:39:52.307571    8751 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:39:52.307626    8751 notify.go:220] Checking for updates...
	I0914 23:39:52.314543    8751 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:39:52.317486    8751 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:39:52.320448    8751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:39:52.323579    8751 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:39:52.326515    8751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:39:52.329775    8751 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:39:52.329829    8751 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:39:52.332468    8751 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:39:52.339460    8751 start.go:297] selected driver: qemu2
	I0914 23:39:52.339471    8751 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:39:52.339478    8751 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:39:52.341529    8751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:39:52.344490    8751 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:39:52.348642    8751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:39:52.348672    8751 cni.go:84] Creating CNI manager for ""
	I0914 23:39:52.348695    8751 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:39:52.348699    8751 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:39:52.348732    8751 start.go:340] cluster config:
	{Name:offline-docker-506000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:39:52.352168    8751 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:39:52.356508    8751 out.go:177] * Starting "offline-docker-506000" primary control-plane node in "offline-docker-506000" cluster
	I0914 23:39:52.363461    8751 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:39:52.363495    8751 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:39:52.363508    8751 cache.go:56] Caching tarball of preloaded images
	I0914 23:39:52.363577    8751 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:39:52.363582    8751 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:39:52.363652    8751 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/offline-docker-506000/config.json ...
	I0914 23:39:52.363662    8751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/offline-docker-506000/config.json: {Name:mkc770ee30d2110376c9f3827e27362e810bdeb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:39:52.363893    8751 start.go:360] acquireMachinesLock for offline-docker-506000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:39:52.363928    8751 start.go:364] duration metric: took 25.417µs to acquireMachinesLock for "offline-docker-506000"
	I0914 23:39:52.363938    8751 start.go:93] Provisioning new machine with config: &{Name:offline-docker-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:39:52.363971    8751 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:39:52.366503    8751 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 23:39:52.382630    8751 start.go:159] libmachine.API.Create for "offline-docker-506000" (driver="qemu2")
	I0914 23:39:52.382656    8751 client.go:168] LocalClient.Create starting
	I0914 23:39:52.382729    8751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:39:52.382761    8751 main.go:141] libmachine: Decoding PEM data...
	I0914 23:39:52.382769    8751 main.go:141] libmachine: Parsing certificate...
	I0914 23:39:52.382811    8751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:39:52.382833    8751 main.go:141] libmachine: Decoding PEM data...
	I0914 23:39:52.382846    8751 main.go:141] libmachine: Parsing certificate...
	I0914 23:39:52.383224    8751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:39:52.551036    8751 main.go:141] libmachine: Creating SSH key...
	I0914 23:39:52.760219    8751 main.go:141] libmachine: Creating Disk image...
	I0914 23:39:52.760231    8751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:39:52.760452    8751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2
	I0914 23:39:52.778547    8751 main.go:141] libmachine: STDOUT: 
	I0914 23:39:52.778569    8751 main.go:141] libmachine: STDERR: 
	I0914 23:39:52.778644    8751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2 +20000M
	I0914 23:39:52.787148    8751 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:39:52.787169    8751 main.go:141] libmachine: STDERR: 
	I0914 23:39:52.787182    8751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2
	I0914 23:39:52.787187    8751 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:39:52.787198    8751 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:39:52.787224    8751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:69:f7:30:33:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2
	I0914 23:39:52.789051    8751 main.go:141] libmachine: STDOUT: 
	I0914 23:39:52.789067    8751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:39:52.789099    8751 client.go:171] duration metric: took 406.443292ms to LocalClient.Create
	I0914 23:39:54.791183    8751 start.go:128] duration metric: took 2.427247875s to createHost
	I0914 23:39:54.791242    8751 start.go:83] releasing machines lock for "offline-docker-506000", held for 2.427331834s
	W0914 23:39:54.791274    8751 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:39:54.804129    8751 out.go:177] * Deleting "offline-docker-506000" in qemu2 ...
	W0914 23:39:54.819094    8751 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:39:54.819103    8751 start.go:729] Will try again in 5 seconds ...
	I0914 23:39:59.821294    8751 start.go:360] acquireMachinesLock for offline-docker-506000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:39:59.821780    8751 start.go:364] duration metric: took 352.709µs to acquireMachinesLock for "offline-docker-506000"
	I0914 23:39:59.821917    8751 start.go:93] Provisioning new machine with config: &{Name:offline-docker-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:39:59.822243    8751 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:39:59.838859    8751 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 23:39:59.889883    8751 start.go:159] libmachine.API.Create for "offline-docker-506000" (driver="qemu2")
	I0914 23:39:59.889930    8751 client.go:168] LocalClient.Create starting
	I0914 23:39:59.890045    8751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:39:59.890109    8751 main.go:141] libmachine: Decoding PEM data...
	I0914 23:39:59.890125    8751 main.go:141] libmachine: Parsing certificate...
	I0914 23:39:59.890183    8751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:39:59.890229    8751 main.go:141] libmachine: Decoding PEM data...
	I0914 23:39:59.890249    8751 main.go:141] libmachine: Parsing certificate...
	I0914 23:39:59.890747    8751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:40:00.072777    8751 main.go:141] libmachine: Creating SSH key...
	I0914 23:40:00.210715    8751 main.go:141] libmachine: Creating Disk image...
	I0914 23:40:00.210722    8751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:40:00.210957    8751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2
	I0914 23:40:00.220144    8751 main.go:141] libmachine: STDOUT: 
	I0914 23:40:00.220164    8751 main.go:141] libmachine: STDERR: 
	I0914 23:40:00.220232    8751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2 +20000M
	I0914 23:40:00.228126    8751 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:40:00.228142    8751 main.go:141] libmachine: STDERR: 
	I0914 23:40:00.228158    8751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2
	I0914 23:40:00.228164    8751 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:40:00.228172    8751 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:40:00.228215    8751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:82:ba:26:aa:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/offline-docker-506000/disk.qcow2
	I0914 23:40:00.229833    8751 main.go:141] libmachine: STDOUT: 
	I0914 23:40:00.229847    8751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:40:00.229861    8751 client.go:171] duration metric: took 339.9315ms to LocalClient.Create
	I0914 23:40:02.232055    8751 start.go:128] duration metric: took 2.409806208s to createHost
	I0914 23:40:02.232153    8751 start.go:83] releasing machines lock for "offline-docker-506000", held for 2.410393375s
	W0914 23:40:02.232592    8751 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-506000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-506000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:40:02.248359    8751 out.go:201] 
	W0914 23:40:02.251298    8751 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:40:02.251367    8751 out.go:270] * 
	* 
	W0914 23:40:02.253759    8751 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:40:02.268311    8751 out.go:201] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-506000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-14 23:40:02.285403 -0700 PDT m=+677.477388626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-506000 -n offline-docker-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-506000 -n offline-docker-506000: exit status 7 (65.791917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-506000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-506000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-506000
--- FAIL: TestOffline (10.21s)
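
Every failure in this run traces to the same line visible above: the qemu2 driver launches the VM through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, i.e. no socket_vmnet daemon is listening on the CI host. A minimal triage sketch follows; it assumes socket_vmnet was installed via Homebrew (one common setup) and reuses the paths from the log.

	# Triage sketch for the host (assumes a Homebrew-managed socket_vmnet; adjust as needed).
	ls -l /var/run/socket_vmnet                # the daemon's unix socket should exist
	sudo brew services start socket_vmnet      # (re)start the daemon if it is not running
	# socket_vmnet_client connects to the socket and execs the given command with the
	# network fd attached, so a trivial command verifies the handshake end to end:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo "socket reachable"

Until that connection succeeds, every test that provisions a qemu2 VM on this agent fails the same way, which matches the pattern in the sections below.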

                                                
                                    
TestAddons/Setup (10.5s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-013000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-013000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.502398084s)

                                                
                                                
-- stdout --
	* [addons-013000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-013000" primary control-plane node in "addons-013000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:29:06.339038    7170 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:29:06.339176    7170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:29:06.339179    7170 out.go:358] Setting ErrFile to fd 2...
	I0914 23:29:06.339182    7170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:29:06.339308    7170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:29:06.340390    7170 out.go:352] Setting JSON to false
	I0914 23:29:06.356448    7170 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5315,"bootTime":1726376431,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:29:06.356515    7170 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:29:06.360416    7170 out.go:177] * [addons-013000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:29:06.367410    7170 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:29:06.367444    7170 notify.go:220] Checking for updates...
	I0914 23:29:06.374401    7170 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:29:06.377399    7170 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:29:06.380380    7170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:29:06.383424    7170 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:29:06.386317    7170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:29:06.389501    7170 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:29:06.393388    7170 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:29:06.400373    7170 start.go:297] selected driver: qemu2
	I0914 23:29:06.400381    7170 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:29:06.400391    7170 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:29:06.402652    7170 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:29:06.405365    7170 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:29:06.408411    7170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:29:06.408427    7170 cni.go:84] Creating CNI manager for ""
	I0914 23:29:06.408453    7170 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:29:06.408457    7170 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:29:06.408490    7170 start.go:340] cluster config:
	{Name:addons-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:29:06.412236    7170 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:29:06.420352    7170 out.go:177] * Starting "addons-013000" primary control-plane node in "addons-013000" cluster
	I0914 23:29:06.424304    7170 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:29:06.424331    7170 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:29:06.424342    7170 cache.go:56] Caching tarball of preloaded images
	I0914 23:29:06.424420    7170 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:29:06.424426    7170 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:29:06.424648    7170 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/addons-013000/config.json ...
	I0914 23:29:06.424659    7170 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/addons-013000/config.json: {Name:mk0ce5166874fbdcd2fdd2019e14a16df30236c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:29:06.425208    7170 start.go:360] acquireMachinesLock for addons-013000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:29:06.425489    7170 start.go:364] duration metric: took 274.542µs to acquireMachinesLock for "addons-013000"
	I0914 23:29:06.425503    7170 start.go:93] Provisioning new machine with config: &{Name:addons-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:29:06.425528    7170 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:29:06.429494    7170 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 23:29:06.447902    7170 start.go:159] libmachine.API.Create for "addons-013000" (driver="qemu2")
	I0914 23:29:06.447956    7170 client.go:168] LocalClient.Create starting
	I0914 23:29:06.448087    7170 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:29:06.751819    7170 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:29:06.895986    7170 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:29:07.199736    7170 main.go:141] libmachine: Creating SSH key...
	I0914 23:29:07.312471    7170 main.go:141] libmachine: Creating Disk image...
	I0914 23:29:07.312476    7170 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:29:07.312736    7170 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2
	I0914 23:29:07.322162    7170 main.go:141] libmachine: STDOUT: 
	I0914 23:29:07.322188    7170 main.go:141] libmachine: STDERR: 
	I0914 23:29:07.322249    7170 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2 +20000M
	I0914 23:29:07.330089    7170 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:29:07.330101    7170 main.go:141] libmachine: STDERR: 
	I0914 23:29:07.330115    7170 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2
	I0914 23:29:07.330122    7170 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:29:07.330161    7170 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:29:07.330190    7170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:a6:c8:42:8c:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2
	I0914 23:29:07.331840    7170 main.go:141] libmachine: STDOUT: 
	I0914 23:29:07.331854    7170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:29:07.331890    7170 client.go:171] duration metric: took 883.93425ms to LocalClient.Create
	I0914 23:29:09.334037    7170 start.go:128] duration metric: took 2.908541917s to createHost
	I0914 23:29:09.334098    7170 start.go:83] releasing machines lock for "addons-013000", held for 2.908650042s
	W0914 23:29:09.334141    7170 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:29:09.341389    7170 out.go:177] * Deleting "addons-013000" in qemu2 ...
	W0914 23:29:09.376617    7170 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:29:09.376637    7170 start.go:729] Will try again in 5 seconds ...
	I0914 23:29:14.378788    7170 start.go:360] acquireMachinesLock for addons-013000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:29:14.379230    7170 start.go:364] duration metric: took 349.708µs to acquireMachinesLock for "addons-013000"
	I0914 23:29:14.379336    7170 start.go:93] Provisioning new machine with config: &{Name:addons-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:29:14.379653    7170 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:29:14.400322    7170 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 23:29:14.450911    7170 start.go:159] libmachine.API.Create for "addons-013000" (driver="qemu2")
	I0914 23:29:14.450948    7170 client.go:168] LocalClient.Create starting
	I0914 23:29:14.451070    7170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:29:14.451126    7170 main.go:141] libmachine: Decoding PEM data...
	I0914 23:29:14.451149    7170 main.go:141] libmachine: Parsing certificate...
	I0914 23:29:14.451249    7170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:29:14.451302    7170 main.go:141] libmachine: Decoding PEM data...
	I0914 23:29:14.451321    7170 main.go:141] libmachine: Parsing certificate...
	I0914 23:29:14.451881    7170 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:29:14.626021    7170 main.go:141] libmachine: Creating SSH key...
	I0914 23:29:14.745577    7170 main.go:141] libmachine: Creating Disk image...
	I0914 23:29:14.745583    7170 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:29:14.745848    7170 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2
	I0914 23:29:14.755634    7170 main.go:141] libmachine: STDOUT: 
	I0914 23:29:14.755659    7170 main.go:141] libmachine: STDERR: 
	I0914 23:29:14.755729    7170 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2 +20000M
	I0914 23:29:14.763636    7170 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:29:14.763664    7170 main.go:141] libmachine: STDERR: 
	I0914 23:29:14.763678    7170 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2
	I0914 23:29:14.763683    7170 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:29:14.763694    7170 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:29:14.763729    7170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:70:bf:3b:70:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/addons-013000/disk.qcow2
	I0914 23:29:14.765448    7170 main.go:141] libmachine: STDOUT: 
	I0914 23:29:14.765461    7170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:29:14.765475    7170 client.go:171] duration metric: took 314.528333ms to LocalClient.Create
	I0914 23:29:16.767694    7170 start.go:128] duration metric: took 2.388034292s to createHost
	I0914 23:29:16.767776    7170 start.go:83] releasing machines lock for "addons-013000", held for 2.388566292s
	W0914 23:29:16.768094    7170 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:29:16.778645    7170 out.go:201] 
	W0914 23:29:16.786805    7170 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:29:16.786874    7170 out.go:270] * 
	* 
	W0914 23:29:16.789565    7170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:29:16.798712    7170 out.go:201] 

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-013000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.50s)
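
Note that the disk-image steps in the trace succeed (both qemu-img convert and qemu-img resize return with empty STDERR); only the network attach fails. The disk pipeline can be replayed in isolation to confirm as much. This is a sketch with illustrative file names, not the CI paths:

	# Replay of the driver's disk step (illustrative paths):
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # wrap the raw image as qcow2
	qemu-img resize disk.qcow2 +20000M                           # grow it by 20000 MB, as the driver does
	qemu-img info disk.qcow2                                     # confirm the enlarged virtual size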

                                                
                                    
TestCertOptions (10.16s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-287000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-287000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.889968958s)

                                                
                                                
-- stdout --
	* [cert-options-287000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-287000" primary control-plane node in "cert-options-287000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-287000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-287000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-287000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-287000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-287000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (82.817708ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-287000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-287000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-287000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-287000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-287000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-287000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.479666ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-287000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-287000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-287000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-287000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-287000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-14 23:51:30.674596 -0700 PDT m=+1365.854100751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-287000 -n cert-options-287000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-287000 -n cert-options-287000: exit status 7 (30.905667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-287000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-287000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-287000
--- FAIL: TestCertOptions (10.16s)
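
Because the host never boots, the SAN assertions at cert_options_test.go:69 can only report misses. Against a running cluster, the same check amounts to reading the apiserver certificate over SSH, as the test attempts above:

	# What the SAN check verifies once the node is up (same profile name as above):
	out/minikube-darwin-arm64 -p cert-options-287000 ssh -- \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	# A passing run lists 127.0.0.1, 192.168.15.15, localhost and www.google.com here.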

                                                
                                    
TestCertExpiration (197.38s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (11.981189s)

                                                
                                                
-- stdout --
	* [cert-expiration-528000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-528000" primary control-plane node in "cert-expiration-528000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-528000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-528000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.237025292s)

                                                
                                                
-- stdout --
	* [cert-expiration-528000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-528000" primary control-plane node in "cert-expiration-528000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-528000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-528000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-528000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-528000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-528000" primary control-plane node in "cert-expiration-528000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-528000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-528000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-528000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-14 23:54:23.265324 -0700 PDT m=+1538.446873167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-528000 -n cert-expiration-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-528000 -n cert-expiration-528000: exit status 7 (67.517125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-528000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-528000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-528000
--- FAIL: TestCertExpiration (197.38s)
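
The 197 s duration is dominated by the test's built-in wait rather than by cluster work: it starts a cluster whose certificates expire after three minutes, waits out that window, then restarts with --cert-expiration=8760h and expects a warning about the expired certificates. Condensed, with a hypothetical profile name, the scenario looks roughly like:

	# The scenario TestCertExpiration drives (hypothetical profile name):
	minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180    # let the 3-minute certificates lapse
	minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h --driver=qemu2
	# A healthy run warns about expired certs during the second start; here both starts
	# fail earlier, at VM creation, so the warning never appears (cert_options_test.go:136).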

                                                
                                    
TestDockerFlags (12.25s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-495000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-495000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.00761575s)

                                                
                                                
-- stdout --
	* [docker-flags-495000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-495000" primary control-plane node in "docker-flags-495000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-495000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:51:08.407468    9313 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:51:08.407631    9313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:51:08.407634    9313 out.go:358] Setting ErrFile to fd 2...
	I0914 23:51:08.407637    9313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:51:08.407768    9313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:51:08.409021    9313 out.go:352] Setting JSON to false
	I0914 23:51:08.426009    9313 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6637,"bootTime":1726376431,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:51:08.426084    9313 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:51:08.433887    9313 out.go:177] * [docker-flags-495000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:51:08.444803    9313 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:51:08.444854    9313 notify.go:220] Checking for updates...
	I0914 23:51:08.453685    9313 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:51:08.457778    9313 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:51:08.460872    9313 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:51:08.463778    9313 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:51:08.466818    9313 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:51:08.470128    9313 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:51:08.470193    9313 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:51:08.470237    9313 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:51:08.473815    9313 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:51:08.480806    9313 start.go:297] selected driver: qemu2
	I0914 23:51:08.480811    9313 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:51:08.480817    9313 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:51:08.482932    9313 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:51:08.485797    9313 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:51:08.488924    9313 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0914 23:51:08.488938    9313 cni.go:84] Creating CNI manager for ""
	I0914 23:51:08.488958    9313 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:51:08.488969    9313 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:51:08.488998    9313 start.go:340] cluster config:
	{Name:docker-flags-495000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-495000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:51:08.492419    9313 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:51:08.499745    9313 out.go:177] * Starting "docker-flags-495000" primary control-plane node in "docker-flags-495000" cluster
	I0914 23:51:08.503857    9313 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:51:08.503870    9313 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:51:08.503884    9313 cache.go:56] Caching tarball of preloaded images
	I0914 23:51:08.503934    9313 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:51:08.503939    9313 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:51:08.503996    9313 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/docker-flags-495000/config.json ...
	I0914 23:51:08.504006    9313 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/docker-flags-495000/config.json: {Name:mk92c6f9acdad4eb23442a8288c60d96ccad721e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:51:08.504203    9313 start.go:360] acquireMachinesLock for docker-flags-495000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:51:10.563709    9313 start.go:364] duration metric: took 2.059501583s to acquireMachinesLock for "docker-flags-495000"
	I0914 23:51:10.563847    9313 start.go:93] Provisioning new machine with config: &{Name:docker-flags-495000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-495000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:51:10.564097    9313 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:51:10.569722    9313 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 23:51:10.620848    9313 start.go:159] libmachine.API.Create for "docker-flags-495000" (driver="qemu2")
	I0914 23:51:10.620903    9313 client.go:168] LocalClient.Create starting
	I0914 23:51:10.621033    9313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:51:10.621095    9313 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:10.621114    9313 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:10.621178    9313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:51:10.621221    9313 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:10.621233    9313 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:10.621845    9313 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:51:10.785077    9313 main.go:141] libmachine: Creating SSH key...
	I0914 23:51:10.934918    9313 main.go:141] libmachine: Creating Disk image...
	I0914 23:51:10.934924    9313 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:51:10.935172    9313 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2
	I0914 23:51:10.944550    9313 main.go:141] libmachine: STDOUT: 
	I0914 23:51:10.944567    9313 main.go:141] libmachine: STDERR: 
	I0914 23:51:10.944660    9313 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2 +20000M
	I0914 23:51:10.952442    9313 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:51:10.952465    9313 main.go:141] libmachine: STDERR: 
	I0914 23:51:10.952478    9313 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2
	I0914 23:51:10.952484    9313 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:51:10.952492    9313 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:51:10.952527    9313 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:bf:a3:35:46:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2
	I0914 23:51:10.954126    9313 main.go:141] libmachine: STDOUT: 
	I0914 23:51:10.954141    9313 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:51:10.954161    9313 client.go:171] duration metric: took 333.253875ms to LocalClient.Create
	I0914 23:51:12.956363    9313 start.go:128] duration metric: took 2.392261875s to createHost
	I0914 23:51:12.956428    9313 start.go:83] releasing machines lock for "docker-flags-495000", held for 2.392686458s
	W0914 23:51:12.956474    9313 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:12.979867    9313 out.go:177] * Deleting "docker-flags-495000" in qemu2 ...
	W0914 23:51:13.017365    9313 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:13.017387    9313 start.go:729] Will try again in 5 seconds ...
	I0914 23:51:18.019577    9313 start.go:360] acquireMachinesLock for docker-flags-495000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:51:18.020028    9313 start.go:364] duration metric: took 316.333µs to acquireMachinesLock for "docker-flags-495000"
	I0914 23:51:18.020155    9313 start.go:93] Provisioning new machine with config: &{Name:docker-flags-495000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-495000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:51:18.020397    9313 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:51:18.030841    9313 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 23:51:18.081666    9313 start.go:159] libmachine.API.Create for "docker-flags-495000" (driver="qemu2")
	I0914 23:51:18.081725    9313 client.go:168] LocalClient.Create starting
	I0914 23:51:18.081846    9313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:51:18.081889    9313 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:18.081903    9313 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:18.081975    9313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:51:18.082005    9313 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:18.082016    9313 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:18.082560    9313 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:51:18.285025    9313 main.go:141] libmachine: Creating SSH key...
	I0914 23:51:18.319986    9313 main.go:141] libmachine: Creating Disk image...
	I0914 23:51:18.319992    9313 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:51:18.320195    9313 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2
	I0914 23:51:18.329246    9313 main.go:141] libmachine: STDOUT: 
	I0914 23:51:18.329266    9313 main.go:141] libmachine: STDERR: 
	I0914 23:51:18.329320    9313 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2 +20000M
	I0914 23:51:18.337275    9313 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:51:18.337288    9313 main.go:141] libmachine: STDERR: 
	I0914 23:51:18.337299    9313 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2
	I0914 23:51:18.337307    9313 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:51:18.337320    9313 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:51:18.337343    9313 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:84:62:f0:6a:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/docker-flags-495000/disk.qcow2
	I0914 23:51:18.338962    9313 main.go:141] libmachine: STDOUT: 
	I0914 23:51:18.339005    9313 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:51:18.339020    9313 client.go:171] duration metric: took 257.291458ms to LocalClient.Create
	I0914 23:51:20.341282    9313 start.go:128] duration metric: took 2.320860625s to createHost
	I0914 23:51:20.341382    9313 start.go:83] releasing machines lock for "docker-flags-495000", held for 2.321358375s
	W0914 23:51:20.341850    9313 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-495000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-495000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:20.347472    9313 out.go:201] 
	W0914 23:51:20.358483    9313 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:51:20.358513    9313 out.go:270] * 
	* 
	W0914 23:51:20.360898    9313 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:51:20.370351    9313 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-495000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-495000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-495000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (83.861834ms)

-- stdout --
	* The control-plane node docker-flags-495000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-495000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-495000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-495000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-495000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-495000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-495000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-495000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-495000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.831916ms)

-- stdout --
	* The control-plane node docker-flags-495000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-495000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-495000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-495000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-495000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-495000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-14 23:51:20.517167 -0700 PDT m=+1355.696552084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-495000 -n docker-flags-495000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-495000 -n docker-flags-495000: exit status 7 (30.921666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-495000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-495000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-495000
--- FAIL: TestDockerFlags (12.25s)
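
Every start attempt above dies at the same step: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and every follow-up ssh/status command sees a stopped host. A minimal sanity check on the build agent before re-running the suite might look like the sketch below; the paths are taken from the log above, but the launchd label is an assumption based on the lima-vm/socket_vmnet packaging and should be verified against the plist actually installed on the agent.

	# Is the daemon socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Assumed launchd label (verify on the agent before relying on it):
	sudo launchctl print system/io.github.lima-vm.socket_vmnet
	# Smoke test with the same client binary the qemu2 driver uses; while the
	# daemon is down this reproduces the "Connection refused" seen above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true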

TestForceSystemdFlag (11.54s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-834000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-834000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.345441041s)

-- stdout --
	* [force-systemd-flag-834000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-834000" primary control-plane node in "force-systemd-flag-834000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-834000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:50:33.636282    9161 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:50:33.636416    9161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:50:33.636419    9161 out.go:358] Setting ErrFile to fd 2...
	I0914 23:50:33.636421    9161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:50:33.636534    9161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:50:33.637632    9161 out.go:352] Setting JSON to false
	I0914 23:50:33.653972    9161 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6602,"bootTime":1726376431,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:50:33.654033    9161 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:50:33.657779    9161 out.go:177] * [force-systemd-flag-834000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:50:33.664035    9161 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:50:33.664079    9161 notify.go:220] Checking for updates...
	I0914 23:50:33.670735    9161 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:50:33.673610    9161 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:50:33.677675    9161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:50:33.680734    9161 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:50:33.683581    9161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:50:33.687017    9161 config.go:182] Loaded profile config "NoKubernetes-019000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:50:33.687102    9161 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:50:33.687149    9161 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:50:33.690703    9161 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:50:33.697702    9161 start.go:297] selected driver: qemu2
	I0914 23:50:33.697709    9161 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:50:33.697715    9161 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:50:33.700006    9161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:50:33.703692    9161 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:50:33.706812    9161 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 23:50:33.706832    9161 cni.go:84] Creating CNI manager for ""
	I0914 23:50:33.706853    9161 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:50:33.706861    9161 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:50:33.706888    9161 start.go:340] cluster config:
	{Name:force-systemd-flag-834000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-834000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:50:33.710703    9161 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:50:33.718694    9161 out.go:177] * Starting "force-systemd-flag-834000" primary control-plane node in "force-systemd-flag-834000" cluster
	I0914 23:50:33.722690    9161 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:50:33.722709    9161 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:50:33.722723    9161 cache.go:56] Caching tarball of preloaded images
	I0914 23:50:33.722797    9161 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:50:33.722802    9161 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:50:33.722861    9161 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/force-systemd-flag-834000/config.json ...
	I0914 23:50:33.722879    9161 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/force-systemd-flag-834000/config.json: {Name:mk4c02686c42358751720247507c58d888c6e32e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:50:33.723104    9161 start.go:360] acquireMachinesLock for force-systemd-flag-834000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:50:35.090838    9161 start.go:364] duration metric: took 1.367701792s to acquireMachinesLock for "force-systemd-flag-834000"
	I0914 23:50:35.091016    9161 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-834000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-834000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:50:35.091259    9161 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:50:35.099742    9161 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 23:50:35.150093    9161 start.go:159] libmachine.API.Create for "force-systemd-flag-834000" (driver="qemu2")
	I0914 23:50:35.150140    9161 client.go:168] LocalClient.Create starting
	I0914 23:50:35.150287    9161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:50:35.150343    9161 main.go:141] libmachine: Decoding PEM data...
	I0914 23:50:35.150366    9161 main.go:141] libmachine: Parsing certificate...
	I0914 23:50:35.150434    9161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:50:35.150479    9161 main.go:141] libmachine: Decoding PEM data...
	I0914 23:50:35.150494    9161 main.go:141] libmachine: Parsing certificate...
	I0914 23:50:35.151184    9161 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:50:35.322492    9161 main.go:141] libmachine: Creating SSH key...
	I0914 23:50:35.379714    9161 main.go:141] libmachine: Creating Disk image...
	I0914 23:50:35.379720    9161 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:50:35.379957    9161 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2
	I0914 23:50:35.389070    9161 main.go:141] libmachine: STDOUT: 
	I0914 23:50:35.389085    9161 main.go:141] libmachine: STDERR: 
	I0914 23:50:35.389158    9161 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2 +20000M
	I0914 23:50:35.396987    9161 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:50:35.396999    9161 main.go:141] libmachine: STDERR: 
	I0914 23:50:35.397013    9161 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2
	I0914 23:50:35.397023    9161 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:50:35.397038    9161 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:50:35.397069    9161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:02:ed:78:6f:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2
	I0914 23:50:35.398680    9161 main.go:141] libmachine: STDOUT: 
	I0914 23:50:35.398700    9161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:50:35.398722    9161 client.go:171] duration metric: took 248.577375ms to LocalClient.Create
	I0914 23:50:37.400865    9161 start.go:128] duration metric: took 2.309606667s to createHost
	I0914 23:50:37.400916    9161 start.go:83] releasing machines lock for "force-systemd-flag-834000", held for 2.31006025s
	W0914 23:50:37.400981    9161 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:50:37.415071    9161 out.go:177] * Deleting "force-systemd-flag-834000" in qemu2 ...
	W0914 23:50:37.453616    9161 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:50:37.453648    9161 start.go:729] Will try again in 5 seconds ...
	I0914 23:50:42.455796    9161 start.go:360] acquireMachinesLock for force-systemd-flag-834000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:50:42.456221    9161 start.go:364] duration metric: took 342.792µs to acquireMachinesLock for "force-systemd-flag-834000"
	I0914 23:50:42.456746    9161 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-834000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-834000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:50:42.457070    9161 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:50:42.470933    9161 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 23:50:42.523302    9161 start.go:159] libmachine.API.Create for "force-systemd-flag-834000" (driver="qemu2")
	I0914 23:50:42.523355    9161 client.go:168] LocalClient.Create starting
	I0914 23:50:42.523549    9161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:50:42.523615    9161 main.go:141] libmachine: Decoding PEM data...
	I0914 23:50:42.523637    9161 main.go:141] libmachine: Parsing certificate...
	I0914 23:50:42.523692    9161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:50:42.523739    9161 main.go:141] libmachine: Decoding PEM data...
	I0914 23:50:42.523753    9161 main.go:141] libmachine: Parsing certificate...
	I0914 23:50:42.524248    9161 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:50:42.791941    9161 main.go:141] libmachine: Creating SSH key...
	I0914 23:50:42.881692    9161 main.go:141] libmachine: Creating Disk image...
	I0914 23:50:42.881698    9161 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:50:42.881921    9161 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2
	I0914 23:50:42.891504    9161 main.go:141] libmachine: STDOUT: 
	I0914 23:50:42.891529    9161 main.go:141] libmachine: STDERR: 
	I0914 23:50:42.891583    9161 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2 +20000M
	I0914 23:50:42.899451    9161 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:50:42.899470    9161 main.go:141] libmachine: STDERR: 
	I0914 23:50:42.899484    9161 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2
	I0914 23:50:42.899489    9161 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:50:42.899495    9161 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:50:42.899532    9161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:90:2f:16:1a:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-flag-834000/disk.qcow2
	I0914 23:50:42.901215    9161 main.go:141] libmachine: STDOUT: 
	I0914 23:50:42.901227    9161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:50:42.901239    9161 client.go:171] duration metric: took 377.84325ms to LocalClient.Create
	I0914 23:50:44.903420    9161 start.go:128] duration metric: took 2.446349792s to createHost
	I0914 23:50:44.903468    9161 start.go:83] releasing machines lock for "force-systemd-flag-834000", held for 2.447254167s
	W0914 23:50:44.903802    9161 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-834000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-834000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:50:44.908499    9161 out.go:201] 
	W0914 23:50:44.925507    9161 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:50:44.925546    9161 out.go:270] * 
	* 
	W0914 23:50:44.928039    9161 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:50:44.935455    9161 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-834000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-834000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-834000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.126875ms)

-- stdout --
	* The control-plane node force-systemd-flag-834000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-834000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-834000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-14 23:50:45.032615 -0700 PDT m=+1320.211579584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-834000 -n force-systemd-flag-834000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-834000 -n force-systemd-flag-834000: exit status 7 (34.620959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-834000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-834000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-834000
--- FAIL: TestForceSystemdFlag (11.54s)
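
For context, the assertion this test never reached is the cgroup-driver check at docker_test.go:110: with --force-systemd the test expects Docker inside the VM to report the systemd cgroup driver. On a healthy run the command would behave roughly as sketched here (the "systemd" output is the expected value implied by the test, not output observed in this report):

	out/minikube-darwin-arm64 -p force-systemd-flag-834000 ssh "docker info --format {{.CgroupDriver}}"
	systemd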

TestForceSystemdEnv (10.14s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-652000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-652000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.910771s)

-- stdout --
	* [force-systemd-env-652000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-652000" primary control-plane node in "force-systemd-env-652000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-652000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:50:58.260646    9268 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:50:58.260764    9268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:50:58.260768    9268 out.go:358] Setting ErrFile to fd 2...
	I0914 23:50:58.260771    9268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:50:58.260888    9268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:50:58.261978    9268 out.go:352] Setting JSON to false
	I0914 23:50:58.277855    9268 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6627,"bootTime":1726376431,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:50:58.277920    9268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:50:58.282955    9268 out.go:177] * [force-systemd-env-652000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:50:58.289725    9268 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:50:58.289791    9268 notify.go:220] Checking for updates...
	I0914 23:50:58.297829    9268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:50:58.303421    9268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:50:58.307930    9268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:50:58.310946    9268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:50:58.313906    9268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0914 23:50:58.317246    9268 config.go:182] Loaded profile config "NoKubernetes-019000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0914 23:50:58.317314    9268 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:50:58.317367    9268 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:50:58.322039    9268 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:50:58.328862    9268 start.go:297] selected driver: qemu2
	I0914 23:50:58.328868    9268 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:50:58.328873    9268 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:50:58.331253    9268 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:50:58.333942    9268 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:50:58.336897    9268 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 23:50:58.336915    9268 cni.go:84] Creating CNI manager for ""
	I0914 23:50:58.336940    9268 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:50:58.336949    9268 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:50:58.336980    9268 start.go:340] cluster config:
	{Name:force-systemd-env-652000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:50:58.340682    9268 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:50:58.345948    9268 out.go:177] * Starting "force-systemd-env-652000" primary control-plane node in "force-systemd-env-652000" cluster
	I0914 23:50:58.348900    9268 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:50:58.348914    9268 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:50:58.348924    9268 cache.go:56] Caching tarball of preloaded images
	I0914 23:50:58.348981    9268 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:50:58.348987    9268 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:50:58.349049    9268 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/force-systemd-env-652000/config.json ...
	I0914 23:50:58.349060    9268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/force-systemd-env-652000/config.json: {Name:mk18171f52acf608cdbd964f5652e86a0ec9292c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:50:58.349276    9268 start.go:360] acquireMachinesLock for force-systemd-env-652000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:50:58.349313    9268 start.go:364] duration metric: took 30.291µs to acquireMachinesLock for "force-systemd-env-652000"
	I0914 23:50:58.349325    9268 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:50:58.349349    9268 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:50:58.358813    9268 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 23:50:58.377567    9268 start.go:159] libmachine.API.Create for "force-systemd-env-652000" (driver="qemu2")
	I0914 23:50:58.377598    9268 client.go:168] LocalClient.Create starting
	I0914 23:50:58.377664    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:50:58.377700    9268 main.go:141] libmachine: Decoding PEM data...
	I0914 23:50:58.377710    9268 main.go:141] libmachine: Parsing certificate...
	I0914 23:50:58.377754    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:50:58.377778    9268 main.go:141] libmachine: Decoding PEM data...
	I0914 23:50:58.377786    9268 main.go:141] libmachine: Parsing certificate...
	I0914 23:50:58.378146    9268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:50:58.541070    9268 main.go:141] libmachine: Creating SSH key...
	I0914 23:50:58.641977    9268 main.go:141] libmachine: Creating Disk image...
	I0914 23:50:58.641987    9268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:50:58.642205    9268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2
	I0914 23:50:58.651734    9268 main.go:141] libmachine: STDOUT: 
	I0914 23:50:58.651756    9268 main.go:141] libmachine: STDERR: 
	I0914 23:50:58.651824    9268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2 +20000M
	I0914 23:50:58.660343    9268 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:50:58.660372    9268 main.go:141] libmachine: STDERR: 
	I0914 23:50:58.660387    9268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2
	I0914 23:50:58.660391    9268 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:50:58.660404    9268 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:50:58.660456    9268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:db:f6:2f:11:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2
	I0914 23:50:58.662251    9268 main.go:141] libmachine: STDOUT: 
	I0914 23:50:58.662267    9268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:50:58.662294    9268 client.go:171] duration metric: took 284.691625ms to LocalClient.Create
	I0914 23:51:00.664481    9268 start.go:128] duration metric: took 2.315132292s to createHost
	I0914 23:51:00.664568    9268 start.go:83] releasing machines lock for "force-systemd-env-652000", held for 2.3152715s
	W0914 23:51:00.664620    9268 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:00.683260    9268 out.go:177] * Deleting "force-systemd-env-652000" in qemu2 ...
	W0914 23:51:00.715158    9268 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:00.715184    9268 start.go:729] Will try again in 5 seconds ...
	I0914 23:51:05.717293    9268 start.go:360] acquireMachinesLock for force-systemd-env-652000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:51:05.722981    9268 start.go:364] duration metric: took 5.563583ms to acquireMachinesLock for "force-systemd-env-652000"
	I0914 23:51:05.723046    9268 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:51:05.723291    9268 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:51:05.734629    9268 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 23:51:05.784305    9268 start.go:159] libmachine.API.Create for "force-systemd-env-652000" (driver="qemu2")
	I0914 23:51:05.784372    9268 client.go:168] LocalClient.Create starting
	I0914 23:51:05.784471    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:51:05.784530    9268 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:05.784549    9268 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:05.784621    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:51:05.784664    9268 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:05.784679    9268 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:05.785183    9268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:51:05.985349    9268 main.go:141] libmachine: Creating SSH key...
	I0914 23:51:06.083000    9268 main.go:141] libmachine: Creating Disk image...
	I0914 23:51:06.083014    9268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:51:06.083221    9268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2
	I0914 23:51:06.092508    9268 main.go:141] libmachine: STDOUT: 
	I0914 23:51:06.092531    9268 main.go:141] libmachine: STDERR: 
	I0914 23:51:06.092610    9268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2 +20000M
	I0914 23:51:06.102920    9268 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:51:06.102940    9268 main.go:141] libmachine: STDERR: 
	I0914 23:51:06.102952    9268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2
	I0914 23:51:06.102958    9268 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:51:06.102970    9268 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:51:06.103007    9268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:42:6d:24:1a:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/force-systemd-env-652000/disk.qcow2
	I0914 23:51:06.104715    9268 main.go:141] libmachine: STDOUT: 
	I0914 23:51:06.104728    9268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:51:06.104740    9268 client.go:171] duration metric: took 320.367ms to LocalClient.Create
	I0914 23:51:08.106910    9268 start.go:128] duration metric: took 2.383608959s to createHost
	I0914 23:51:08.106994    9268 start.go:83] releasing machines lock for "force-systemd-env-652000", held for 2.384012s
	W0914 23:51:08.107294    9268 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:08.120825    9268 out.go:201] 
	W0914 23:51:08.124915    9268 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:51:08.124951    9268 out.go:270] * 
	* 
	W0914 23:51:08.126811    9268 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:51:08.135764    9268 out.go:201] 

** /stderr **
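The stderr trace above also shows the driver's retry shape: the first LocalClient.Create fails (start.go:714), the half-created profile is deleted, start.go:729 waits five seconds, and a single second attempt runs before the command exits with GUEST_PROVISION. A simplified sketch of that control flow follows; the function name stands in for the libmachine create path and is illustrative only.

// retry_start.go: simplified, illustrative sketch of the retry seen above.
// startHost stands in for the libmachine create path; in these runs it
// always fails the same way.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		// One retry after a fixed delay (start.go:729), then give up.
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}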
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-652000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-652000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-652000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (70.383542ms)

-- stdout --
	* The control-plane node force-systemd-env-652000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-652000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-652000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-14 23:51:08.21711 -0700 PDT m=+1343.396348834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-652000 -n force-systemd-env-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-652000 -n force-systemd-env-652000: exit status 7 (35.881834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-652000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-652000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-652000
--- FAIL: TestForceSystemdEnv (10.14s)
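Even after the start fails, the test still runs the cgroup-driver query, which exits 83 because the host is stopped. Below is a rough, illustrative reduction of what docker_test.go:110-112 does, assuming the assertion boils down to comparing the trimmed ssh output with "systemd" (the real helper in the integration suite is more involved).

// cgroup_driver_check.go: illustrative reduction of the check at
// docker_test.go:110-112, not the suite's actual helper.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-env-652000",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		// Exit status 83, as in the log above, means the host is not
		// running, so there is no docker daemon to query.
		fmt.Printf("failed to get docker cgroup driver: %v\n%s", err, out)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "systemd" {
		fmt.Printf("expected systemd cgroup driver, got %q\n", got)
	}
}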

TestErrorSpam/setup (9.83s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-751000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-751000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 --driver=qemu2 : exit status 80 (9.831705125s)

-- stdout --
	* [nospam-751000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-751000" primary control-plane node in "nospam-751000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-751000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-751000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-751000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-751000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-751000] minikube v1.34.0 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19644
- KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-751000" primary control-plane node in "nospam-751000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-751000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-751000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.83s)
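TestErrorSpam/setup treats any stderr line outside its allow-list as spam, which is why every row of the advice banner, box-drawing characters included, is reported as a separate "unexpected stderr" entry; the setup then also fails because the expected kubeadm sub-steps never appear in stdout. An illustrative reduction of the stderr scan is sketched below (the real allow-list in error_spam_test.go is more nuanced).

// unexpected_stderr.go: illustrative reduction of the error_spam_test.go:96
// scan. Every non-empty stderr line not on an allow-list is reported.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func isExpected(line string) bool {
	// The real test tolerates far more; this sketch only allows
	// download-progress style lines as an example.
	return strings.HasPrefix(line, "> ")
}

func main() {
	stderr := "! StartHost failed, but will try again: ...\n* Failed to start qemu2 VM. ..."
	sc := bufio.NewScanner(strings.NewReader(stderr))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line != "" && !isExpected(line) {
			fmt.Printf("unexpected stderr: %q\n", line)
		}
	}
}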

TestFunctional/serial/StartWithProxy (10.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-893000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-893000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (10.082029042s)

-- stdout --
	* [functional-893000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-893000" primary control-plane node in "functional-893000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-893000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51077 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51077 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51077 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-893000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-893000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-893000] minikube v1.34.0 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19644
- KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-893000" primary control-plane node in "functional-893000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-893000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51077 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51077 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51077 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-893000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (69.852208ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.15s)
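This test starts minikube with a local HTTP proxy in its environment (the "Local proxy ignored: not passing HTTP_PROXY=localhost:51077" warnings come from that) and then wants "Found network options:" and "You appear to be using a proxy" in the output; since the VM never boots, neither string is produced. A sketch of launching the same start with the proxy variable injected, loosely mirroring functional_test.go:2234, follows; the helper and its checks are illustrative.

// start_with_proxy.go: illustrative launch of the failing start command
// with a local proxy injected, loosely mirroring functional_test.go:2234.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-893000",
		"--memory=4000", "--apiserver-port=8441", "--wait=all", "--driver=qemu2")
	cmd.Env = append(os.Environ(), "HTTP_PROXY=localhost:51077")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("start failed: %v\n", err)
	}
	// The test wants these proxy notices; neither appears when the VM
	// never boots.
	for _, want := range []string{"Found network options:", "You appear to be using a proxy"} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("missing %q in output\n", want)
		}
	}
}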

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-893000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-893000 --alsologtostderr -v=8: exit status 80 (5.18034125s)

-- stdout --
	* [functional-893000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-893000" primary control-plane node in "functional-893000" cluster
	* Restarting existing qemu2 VM for "functional-893000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-893000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:29:45.048553    7306 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:29:45.048704    7306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:29:45.048707    7306 out.go:358] Setting ErrFile to fd 2...
	I0914 23:29:45.048710    7306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:29:45.048839    7306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:29:45.049856    7306 out.go:352] Setting JSON to false
	I0914 23:29:45.065782    7306 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5354,"bootTime":1726376431,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:29:45.065858    7306 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:29:45.071119    7306 out.go:177] * [functional-893000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:29:45.077080    7306 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:29:45.077136    7306 notify.go:220] Checking for updates...
	I0914 23:29:45.081613    7306 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:29:45.085046    7306 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:29:45.088050    7306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:29:45.091114    7306 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:29:45.094022    7306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:29:45.097409    7306 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:29:45.097463    7306 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:29:45.102001    7306 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:29:45.109044    7306 start.go:297] selected driver: qemu2
	I0914 23:29:45.109050    7306 start.go:901] validating driver "qemu2" against &{Name:functional-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:29:45.109113    7306 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:29:45.111404    7306 cni.go:84] Creating CNI manager for ""
	I0914 23:29:45.111437    7306 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:29:45.111488    7306 start.go:340] cluster config:
	{Name:functional-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:29:45.114886    7306 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:29:45.122057    7306 out.go:177] * Starting "functional-893000" primary control-plane node in "functional-893000" cluster
	I0914 23:29:45.126058    7306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:29:45.126075    7306 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:29:45.126083    7306 cache.go:56] Caching tarball of preloaded images
	I0914 23:29:45.126136    7306 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:29:45.126141    7306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:29:45.126186    7306 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/functional-893000/config.json ...
	I0914 23:29:45.126648    7306 start.go:360] acquireMachinesLock for functional-893000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:29:45.126677    7306 start.go:364] duration metric: took 21.958µs to acquireMachinesLock for "functional-893000"
	I0914 23:29:45.126685    7306 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:29:45.126690    7306 fix.go:54] fixHost starting: 
	I0914 23:29:45.126799    7306 fix.go:112] recreateIfNeeded on functional-893000: state=Stopped err=<nil>
	W0914 23:29:45.126807    7306 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:29:45.134082    7306 out.go:177] * Restarting existing qemu2 VM for "functional-893000" ...
	I0914 23:29:45.137906    7306 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:29:45.137944    7306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:b4:ee:f1:e5:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/disk.qcow2
	I0914 23:29:45.139828    7306 main.go:141] libmachine: STDOUT: 
	I0914 23:29:45.139847    7306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:29:45.139875    7306 fix.go:56] duration metric: took 13.185416ms for fixHost
	I0914 23:29:45.139881    7306 start.go:83] releasing machines lock for "functional-893000", held for 13.200042ms
	W0914 23:29:45.139886    7306 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:29:45.139919    7306 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:29:45.139923    7306 start.go:729] Will try again in 5 seconds ...
	I0914 23:29:50.142129    7306 start.go:360] acquireMachinesLock for functional-893000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:29:50.142524    7306 start.go:364] duration metric: took 297.375µs to acquireMachinesLock for "functional-893000"
	I0914 23:29:50.142646    7306 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:29:50.142664    7306 fix.go:54] fixHost starting: 
	I0914 23:29:50.143367    7306 fix.go:112] recreateIfNeeded on functional-893000: state=Stopped err=<nil>
	W0914 23:29:50.143390    7306 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:29:50.150768    7306 out.go:177] * Restarting existing qemu2 VM for "functional-893000" ...
	I0914 23:29:50.154736    7306 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:29:50.155060    7306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:b4:ee:f1:e5:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/disk.qcow2
	I0914 23:29:50.163771    7306 main.go:141] libmachine: STDOUT: 
	I0914 23:29:50.163838    7306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:29:50.163918    7306 fix.go:56] duration metric: took 21.24975ms for fixHost
	I0914 23:29:50.163943    7306 start.go:83] releasing machines lock for "functional-893000", held for 21.393167ms
	W0914 23:29:50.164169    7306 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-893000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-893000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:29:50.171790    7306 out.go:201] 
	W0914 23:29:50.175823    7306 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:29:50.175847    7306 out.go:270] * 
	* 
	W0914 23:29:50.178465    7306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:29:50.184703    7306 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-893000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.182157542s for "functional-893000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (66.424459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
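Every failure in this block reduces to one driver error: the qemu2 driver routes VM networking through socket_vmnet_client, and its dial of /var/run/socket_vmnet is refused, i.e. no socket_vmnet daemon is listening on the host. A minimal Go sketch of that pre-flight check, assuming only the standard library (the socket path is taken from the SocketVMnetPath value in the logs; this is not minikube's own code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the unix socket that socket_vmnet_client dials before a qemu2 start.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// The same condition the driver reports: "connection refused" means the
		// socket_vmnet daemon is not running (or not listening at this path).
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening")
}

On this agent the probe would take the unreachable branch, which is why every retry of "functional-893000" aborts before the VM boots.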

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (32.27ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-893000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (31.397667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
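This check never touches the VM; kubectl only reads the local kubeconfig. Because the earlier start failed, minikube never wrote a "functional-893000" context, so current-context comes back unset. A sketch of the same lookup via client-go, assuming the k8s.io/client-go module is available (not the test's own code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the same way kubectl does (honors $KUBECONFIG).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	if cfg.CurrentContext == "" {
		// Matches the kubectl error above: "current-context is not set".
		fmt.Println("current-context is not set")
		return
	}
	fmt.Println("current-context:", cfg.CurrentContext)
}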

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-893000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-893000 get po -A: exit status 1 (25.931875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-893000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-893000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-893000\n"*: args "kubectl --context functional-893000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-893000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (30.63475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh sudo crictl images: exit status 83 (41.805917ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-893000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.006416ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-893000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.846292ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.830833ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-893000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.95s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 kubectl -- --context functional-893000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 kubectl -- --context functional-893000 get pods: exit status 1 (1.913310584s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-893000
	* no server found for cluster "functional-893000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-893000 kubectl -- --context functional-893000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (32.638209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (1.95s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.05s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-893000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-893000 get pods: exit status 1 (1.01700325s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-893000
	* no server found for cluster "functional-893000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-893000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (30.912708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.05s)

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-893000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-893000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.182165292s)

-- stdout --
	* [functional-893000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-893000" primary control-plane node in "functional-893000" cluster
	* Restarting existing qemu2 VM for "functional-893000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-893000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-893000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-893000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.182668375s for "functional-893000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (66.798959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
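The flag exercised here uses the component.key=value form: apiserver.enable-admission-plugins=NamespaceAutoProvision, which surfaces as the ExtraOptions entry in the cluster config captured in the logs below. A hypothetical parser for that form, shown only to make the flag's structure concrete (a sketch, not minikube's implementation):

package main

import (
	"fmt"
	"strings"
)

// extraOption mirrors the shape of the ExtraOptions entries in the logged
// cluster config: {Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}.
type extraOption struct {
	Component, Key, Value string
}

func parseExtraConfig(s string) (extraOption, error) {
	kv := strings.SplitN(s, "=", 2)
	if len(kv) != 2 {
		return extraOption{}, fmt.Errorf("expected component.key=value, got %q", s)
	}
	ck := strings.SplitN(kv[0], ".", 2)
	if len(ck) != 2 {
		return extraOption{}, fmt.Errorf("expected component.key before '=', got %q", kv[0])
	}
	return extraOption{Component: ck[0], Key: ck[1], Value: kv[1]}, nil
}

func main() {
	opt, _ := parseExtraConfig("apiserver.enable-admission-plugins=NamespaceAutoProvision")
	fmt.Printf("%+v\n", opt)
}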

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-893000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-893000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (30.646583ms)

** stderr ** 
	error: context "functional-893000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-893000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (29.947125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 logs: exit status 83 (74.556208ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT |                     |
	|         | -p download-only-312000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT | 14 Sep 24 23:28 PDT |
	| delete  | -p download-only-312000                                                  | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT | 14 Sep 24 23:28 PDT |
	| start   | -o=json --download-only                                                  | download-only-074000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT |                     |
	|         | -p download-only-074000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	| delete  | -p download-only-074000                                                  | download-only-074000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	| delete  | -p download-only-312000                                                  | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	| delete  | -p download-only-074000                                                  | download-only-074000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	| start   | --download-only -p                                                       | binary-mirror-368000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | binary-mirror-368000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51049                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-368000                                                  | binary-mirror-368000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	| addons  | enable dashboard -p                                                      | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | addons-013000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | addons-013000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-013000 --wait=true                                             | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-013000                                                         | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	| start   | -p nospam-751000 -n=1 --memory=2250 --wait=false                         | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-751000                                                         | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	| start   | -p functional-893000                                                     | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-893000                                                     | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	|         | minikube-local-cache-test:functional-893000                              |                      |         |         |                     |                     |
	| cache   | functional-893000 cache delete                                           | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	|         | minikube-local-cache-test:functional-893000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	| ssh     | functional-893000 ssh sudo                                               | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-893000                                                        | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-893000 ssh                                                    | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-893000 cache reload                                           | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	| ssh     | functional-893000 ssh                                                    | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-893000 kubectl --                                             | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | --context functional-893000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-893000                                                     | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 23:29:56
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 23:29:56.690742    7380 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:29:56.690877    7380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:29:56.690879    7380 out.go:358] Setting ErrFile to fd 2...
	I0914 23:29:56.690881    7380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:29:56.691006    7380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:29:56.692011    7380 out.go:352] Setting JSON to false
	I0914 23:29:56.707836    7380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5365,"bootTime":1726376431,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:29:56.707900    7380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:29:56.713970    7380 out.go:177] * [functional-893000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:29:56.722090    7380 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:29:56.722117    7380 notify.go:220] Checking for updates...
	I0914 23:29:56.730065    7380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:29:56.733973    7380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:29:56.737093    7380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:29:56.740084    7380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:29:56.743181    7380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:29:56.746412    7380 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:29:56.746463    7380 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:29:56.751201    7380 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:29:56.758040    7380 start.go:297] selected driver: qemu2
	I0914 23:29:56.758044    7380 start.go:901] validating driver "qemu2" against &{Name:functional-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:29:56.758090    7380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:29:56.760344    7380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:29:56.760365    7380 cni.go:84] Creating CNI manager for ""
	I0914 23:29:56.760393    7380 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:29:56.760445    7380 start.go:340] cluster config:
	{Name:functional-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:29:56.764022    7380 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:29:56.771137    7380 out.go:177] * Starting "functional-893000" primary control-plane node in "functional-893000" cluster
	I0914 23:29:56.775093    7380 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:29:56.775105    7380 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:29:56.775117    7380 cache.go:56] Caching tarball of preloaded images
	I0914 23:29:56.775178    7380 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:29:56.775181    7380 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:29:56.775239    7380 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/functional-893000/config.json ...
	I0914 23:29:56.775658    7380 start.go:360] acquireMachinesLock for functional-893000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:29:56.775690    7380 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "functional-893000"
	I0914 23:29:56.775696    7380 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:29:56.775699    7380 fix.go:54] fixHost starting: 
	I0914 23:29:56.775809    7380 fix.go:112] recreateIfNeeded on functional-893000: state=Stopped err=<nil>
	W0914 23:29:56.775815    7380 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:29:56.783102    7380 out.go:177] * Restarting existing qemu2 VM for "functional-893000" ...
	I0914 23:29:56.787089    7380 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:29:56.787123    7380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:b4:ee:f1:e5:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/disk.qcow2
	I0914 23:29:56.789049    7380 main.go:141] libmachine: STDOUT: 
	I0914 23:29:56.789063    7380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:29:56.789092    7380 fix.go:56] duration metric: took 13.392625ms for fixHost
	I0914 23:29:56.789095    7380 start.go:83] releasing machines lock for "functional-893000", held for 13.403416ms
	W0914 23:29:56.789100    7380 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:29:56.789143    7380 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:29:56.789148    7380 start.go:729] Will try again in 5 seconds ...
	I0914 23:30:01.791208    7380 start.go:360] acquireMachinesLock for functional-893000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:30:01.791571    7380 start.go:364] duration metric: took 295.042µs to acquireMachinesLock for "functional-893000"
	I0914 23:30:01.791708    7380 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:30:01.791716    7380 fix.go:54] fixHost starting: 
	I0914 23:30:01.792144    7380 fix.go:112] recreateIfNeeded on functional-893000: state=Stopped err=<nil>
	W0914 23:30:01.792155    7380 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:30:01.800538    7380 out.go:177] * Restarting existing qemu2 VM for "functional-893000" ...
	I0914 23:30:01.805461    7380 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:30:01.805602    7380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:b4:ee:f1:e5:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/disk.qcow2
	I0914 23:30:01.811174    7380 main.go:141] libmachine: STDOUT: 
	I0914 23:30:01.811245    7380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:30:01.811301    7380 fix.go:56] duration metric: took 19.587041ms for fixHost
	I0914 23:30:01.811311    7380 start.go:83] releasing machines lock for "functional-893000", held for 19.695542ms
	W0914 23:30:01.811465    7380 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-893000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:30:01.818498    7380 out.go:201] 
	W0914 23:30:01.822635    7380 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:30:01.822659    7380 out.go:270] * 
	W0914 23:30:01.824037    7380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:30:01.833570    7380 out.go:201] 
	
	
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-893000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT |                     |
|         | -p download-only-312000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT | 14 Sep 24 23:28 PDT |
| delete  | -p download-only-312000                                                  | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT | 14 Sep 24 23:28 PDT |
| start   | -o=json --download-only                                                  | download-only-074000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT |                     |
|         | -p download-only-074000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| delete  | -p download-only-074000                                                  | download-only-074000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| delete  | -p download-only-312000                                                  | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| delete  | -p download-only-074000                                                  | download-only-074000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| start   | --download-only -p                                                       | binary-mirror-368000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | binary-mirror-368000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51049                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-368000                                                  | binary-mirror-368000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| addons  | enable dashboard -p                                                      | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | addons-013000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | addons-013000                                                            |                      |         |         |                     |                     |
| start   | -p addons-013000 --wait=true                                             | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-013000                                                         | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| start   | -p nospam-751000 -n=1 --memory=2250 --wait=false                         | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-751000                                                         | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| start   | -p functional-893000                                                     | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-893000                                                     | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | minikube-local-cache-test:functional-893000                              |                      |         |         |                     |                     |
| cache   | functional-893000 cache delete                                           | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | minikube-local-cache-test:functional-893000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| ssh     | functional-893000 ssh sudo                                               | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-893000                                                        | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-893000 ssh                                                    | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-893000 cache reload                                           | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| ssh     | functional-893000 ssh                                                    | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-893000 kubectl --                                             | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --context functional-893000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-893000                                                     | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/09/14 23:29:56
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0914 23:29:56.690742    7380 out.go:345] Setting OutFile to fd 1 ...
I0914 23:29:56.690877    7380 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:29:56.690879    7380 out.go:358] Setting ErrFile to fd 2...
I0914 23:29:56.690881    7380 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:29:56.691006    7380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
I0914 23:29:56.692011    7380 out.go:352] Setting JSON to false
I0914 23:29:56.707836    7380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5365,"bootTime":1726376431,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0914 23:29:56.707900    7380 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0914 23:29:56.713970    7380 out.go:177] * [functional-893000] minikube v1.34.0 on Darwin 14.5 (arm64)
I0914 23:29:56.722090    7380 out.go:177]   - MINIKUBE_LOCATION=19644
I0914 23:29:56.722117    7380 notify.go:220] Checking for updates...
I0914 23:29:56.730065    7380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
I0914 23:29:56.733973    7380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0914 23:29:56.737093    7380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0914 23:29:56.740084    7380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
I0914 23:29:56.743181    7380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0914 23:29:56.746412    7380 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:29:56.746463    7380 driver.go:394] Setting default libvirt URI to qemu:///system
I0914 23:29:56.751201    7380 out.go:177] * Using the qemu2 driver based on existing profile
I0914 23:29:56.758040    7380 start.go:297] selected driver: qemu2
I0914 23:29:56.758044    7380 start.go:901] validating driver "qemu2" against &{Name:functional-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0914 23:29:56.758090    7380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0914 23:29:56.760344    7380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0914 23:29:56.760365    7380 cni.go:84] Creating CNI manager for ""
I0914 23:29:56.760393    7380 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0914 23:29:56.760445    7380 start.go:340] cluster config:
{Name:functional-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0914 23:29:56.764022    7380 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 23:29:56.771137    7380 out.go:177] * Starting "functional-893000" primary control-plane node in "functional-893000" cluster
I0914 23:29:56.775093    7380 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0914 23:29:56.775105    7380 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0914 23:29:56.775117    7380 cache.go:56] Caching tarball of preloaded images
I0914 23:29:56.775178    7380 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0914 23:29:56.775181    7380 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0914 23:29:56.775239    7380 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/functional-893000/config.json ...
I0914 23:29:56.775658    7380 start.go:360] acquireMachinesLock for functional-893000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0914 23:29:56.775690    7380 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "functional-893000"
I0914 23:29:56.775696    7380 start.go:96] Skipping create...Using existing machine configuration
I0914 23:29:56.775699    7380 fix.go:54] fixHost starting: 
I0914 23:29:56.775809    7380 fix.go:112] recreateIfNeeded on functional-893000: state=Stopped err=<nil>
W0914 23:29:56.775815    7380 fix.go:138] unexpected machine state, will restart: <nil>
I0914 23:29:56.783102    7380 out.go:177] * Restarting existing qemu2 VM for "functional-893000" ...
I0914 23:29:56.787089    7380 qemu.go:418] Using hvf for hardware acceleration
I0914 23:29:56.787123    7380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:b4:ee:f1:e5:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/disk.qcow2
I0914 23:29:56.789049    7380 main.go:141] libmachine: STDOUT: 
I0914 23:29:56.789063    7380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0914 23:29:56.789092    7380 fix.go:56] duration metric: took 13.392625ms for fixHost
I0914 23:29:56.789095    7380 start.go:83] releasing machines lock for "functional-893000", held for 13.403416ms
W0914 23:29:56.789100    7380 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0914 23:29:56.789143    7380 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0914 23:29:56.789148    7380 start.go:729] Will try again in 5 seconds ...
I0914 23:30:01.791208    7380 start.go:360] acquireMachinesLock for functional-893000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0914 23:30:01.791571    7380 start.go:364] duration metric: took 295.042µs to acquireMachinesLock for "functional-893000"
I0914 23:30:01.791708    7380 start.go:96] Skipping create...Using existing machine configuration
I0914 23:30:01.791716    7380 fix.go:54] fixHost starting: 
I0914 23:30:01.792144    7380 fix.go:112] recreateIfNeeded on functional-893000: state=Stopped err=<nil>
W0914 23:30:01.792155    7380 fix.go:138] unexpected machine state, will restart: <nil>
I0914 23:30:01.800538    7380 out.go:177] * Restarting existing qemu2 VM for "functional-893000" ...
I0914 23:30:01.805461    7380 qemu.go:418] Using hvf for hardware acceleration
I0914 23:30:01.805602    7380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:b4:ee:f1:e5:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/disk.qcow2
I0914 23:30:01.811174    7380 main.go:141] libmachine: STDOUT: 
I0914 23:30:01.811245    7380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0914 23:30:01.811301    7380 fix.go:56] duration metric: took 19.587041ms for fixHost
I0914 23:30:01.811311    7380 start.go:83] releasing machines lock for "functional-893000", held for 19.695542ms
W0914 23:30:01.811465    7380 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-893000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0914 23:30:01.818498    7380 out.go:201] 
W0914 23:30:01.822635    7380 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0914 23:30:01.822659    7380 out.go:270] * 
W0914 23:30:01.824037    7380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0914 23:30:01.833570    7380 out.go:201] 

* The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
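Every start attempt in the failure above dies the same way: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the guest VM never boots and `minikube logs` has no Linux guest output for the test's "Linux" assertion to match. As a minimal diagnostic sketch (not part of the test suite; the socket path is taken from the log above, and the program is hypothetical), one could probe the daemon's unix socket directly in Go:

// socketcheck.go (hypothetical standalone probe, not minikube code):
// dials the unix socket that the qemu2 driver's socket_vmnet_client
// uses; a "connection refused" error here reproduces the failure
// seen throughout this run.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the log above

	// The socket file must exist before a dial can succeed.
	if _, err := os.Stat(sock); err != nil {
		fmt.Fprintf(os.Stderr, "socket missing: %v\n", err)
		os.Exit(1)
	}

	// Dial with a short timeout; "connection refused" means the
	// socket file exists but no daemon is accepting on it.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial failed: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails as it does on this agent, the socket_vmnet daemon presumably needs to be restarted on the host before any qemu2-driver test can pass; the LogsFileCmd failure below shares this root cause.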

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3280894048/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT |                     |
|         | -p download-only-312000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT | 14 Sep 24 23:28 PDT |
| delete  | -p download-only-312000                                                  | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT | 14 Sep 24 23:28 PDT |
| start   | -o=json --download-only                                                  | download-only-074000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT |                     |
|         | -p download-only-074000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| delete  | -p download-only-074000                                                  | download-only-074000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| delete  | -p download-only-312000                                                  | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| delete  | -p download-only-074000                                                  | download-only-074000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| start   | --download-only -p                                                       | binary-mirror-368000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | binary-mirror-368000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51049                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-368000                                                  | binary-mirror-368000 | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| addons  | enable dashboard -p                                                      | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | addons-013000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | addons-013000                                                            |                      |         |         |                     |                     |
| start   | -p addons-013000 --wait=true                                             | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-013000                                                         | addons-013000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| start   | -p nospam-751000 -n=1 --memory=2250 --wait=false                         | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-751000 --log_dir                                                  | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-751000                                                         | nospam-751000        | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| start   | -p functional-893000                                                     | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-893000                                                     | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-893000 cache add                                              | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | minikube-local-cache-test:functional-893000                              |                      |         |         |                     |                     |
| cache   | functional-893000 cache delete                                           | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | minikube-local-cache-test:functional-893000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| ssh     | functional-893000 ssh sudo                                               | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-893000                                                        | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-893000 ssh                                                    | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-893000 cache reload                                           | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
| ssh     | functional-893000 ssh                                                    | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT | 14 Sep 24 23:29 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-893000 kubectl --                                             | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --context functional-893000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-893000                                                     | functional-893000    | jenkins | v1.34.0 | 14 Sep 24 23:29 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/14 23:29:56
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0914 23:29:56.690742    7380 out.go:345] Setting OutFile to fd 1 ...
I0914 23:29:56.690877    7380 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:29:56.690879    7380 out.go:358] Setting ErrFile to fd 2...
I0914 23:29:56.690881    7380 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:29:56.691006    7380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
I0914 23:29:56.692011    7380 out.go:352] Setting JSON to false
I0914 23:29:56.707836    7380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5365,"bootTime":1726376431,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0914 23:29:56.707900    7380 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0914 23:29:56.713970    7380 out.go:177] * [functional-893000] minikube v1.34.0 on Darwin 14.5 (arm64)
I0914 23:29:56.722090    7380 out.go:177]   - MINIKUBE_LOCATION=19644
I0914 23:29:56.722117    7380 notify.go:220] Checking for updates...
I0914 23:29:56.730065    7380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
I0914 23:29:56.733973    7380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0914 23:29:56.737093    7380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0914 23:29:56.740084    7380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
I0914 23:29:56.743181    7380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0914 23:29:56.746412    7380 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:29:56.746463    7380 driver.go:394] Setting default libvirt URI to qemu:///system
I0914 23:29:56.751201    7380 out.go:177] * Using the qemu2 driver based on existing profile
I0914 23:29:56.758040    7380 start.go:297] selected driver: qemu2
I0914 23:29:56.758044    7380 start.go:901] validating driver "qemu2" against &{Name:functional-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0914 23:29:56.758090    7380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0914 23:29:56.760344    7380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0914 23:29:56.760365    7380 cni.go:84] Creating CNI manager for ""
I0914 23:29:56.760393    7380 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0914 23:29:56.760445    7380 start.go:340] cluster config:
{Name:functional-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0914 23:29:56.764022    7380 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 23:29:56.771137    7380 out.go:177] * Starting "functional-893000" primary control-plane node in "functional-893000" cluster
I0914 23:29:56.775093    7380 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0914 23:29:56.775105    7380 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0914 23:29:56.775117    7380 cache.go:56] Caching tarball of preloaded images
I0914 23:29:56.775178    7380 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0914 23:29:56.775181    7380 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0914 23:29:56.775239    7380 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/functional-893000/config.json ...
I0914 23:29:56.775658    7380 start.go:360] acquireMachinesLock for functional-893000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0914 23:29:56.775690    7380 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "functional-893000"
I0914 23:29:56.775696    7380 start.go:96] Skipping create...Using existing machine configuration
I0914 23:29:56.775699    7380 fix.go:54] fixHost starting: 
I0914 23:29:56.775809    7380 fix.go:112] recreateIfNeeded on functional-893000: state=Stopped err=<nil>
W0914 23:29:56.775815    7380 fix.go:138] unexpected machine state, will restart: <nil>
I0914 23:29:56.783102    7380 out.go:177] * Restarting existing qemu2 VM for "functional-893000" ...
I0914 23:29:56.787089    7380 qemu.go:418] Using hvf for hardware acceleration
I0914 23:29:56.787123    7380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:b4:ee:f1:e5:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/disk.qcow2
I0914 23:29:56.789049    7380 main.go:141] libmachine: STDOUT: 
I0914 23:29:56.789063    7380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0914 23:29:56.789092    7380 fix.go:56] duration metric: took 13.392625ms for fixHost
I0914 23:29:56.789095    7380 start.go:83] releasing machines lock for "functional-893000", held for 13.403416ms
W0914 23:29:56.789100    7380 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0914 23:29:56.789143    7380 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0914 23:29:56.789148    7380 start.go:729] Will try again in 5 seconds ...
I0914 23:30:01.791208    7380 start.go:360] acquireMachinesLock for functional-893000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0914 23:30:01.791571    7380 start.go:364] duration metric: took 295.042µs to acquireMachinesLock for "functional-893000"
I0914 23:30:01.791708    7380 start.go:96] Skipping create...Using existing machine configuration
I0914 23:30:01.791716    7380 fix.go:54] fixHost starting: 
I0914 23:30:01.792144    7380 fix.go:112] recreateIfNeeded on functional-893000: state=Stopped err=<nil>
W0914 23:30:01.792155    7380 fix.go:138] unexpected machine state, will restart: <nil>
I0914 23:30:01.800538    7380 out.go:177] * Restarting existing qemu2 VM for "functional-893000" ...
I0914 23:30:01.805461    7380 qemu.go:418] Using hvf for hardware acceleration
I0914 23:30:01.805602    7380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:b4:ee:f1:e5:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/functional-893000/disk.qcow2
I0914 23:30:01.811174    7380 main.go:141] libmachine: STDOUT: 
I0914 23:30:01.811245    7380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0914 23:30:01.811301    7380 fix.go:56] duration metric: took 19.587041ms for fixHost
I0914 23:30:01.811311    7380 start.go:83] releasing machines lock for "functional-893000", held for 19.695542ms
W0914 23:30:01.811465    7380 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-893000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0914 23:30:01.818498    7380 out.go:201] 
W0914 23:30:01.822635    7380 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0914 23:30:01.822659    7380 out.go:270] * 
W0914 23:30:01.824037    7380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0914 23:30:01.833570    7380 out.go:201] 
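
Both start attempts above fail at the same point: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial of /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon was listening on the agent. As a hedged illustration (not part of minikube or the test suite), the same reachability check can be reproduced with a plain unix-domain dial in Go, using only the socket path taken from the log:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path copied from the driver log above; adjust if socket_vmnet
	// is installed elsewhere.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A stopped daemon yields "connect: connection refused",
		// matching the STDERR lines in this log.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}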
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
TestFunctional/serial/InvalidService (0.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-893000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-893000 apply -f testdata/invalidsvc.yaml: exit status 1 (26.283917ms)
** stderr ** 
	error: context "functional-893000" does not exist
** /stderr **
functional_test.go:2323: kubectl --context functional-893000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
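
The kubectl steps in this test never reach an apiserver: because the earlier cluster start failed, no "functional-893000" entry was ever written to the kubeconfig, so kubectl rejects the --context flag up front. A minimal sketch of the same lookup using client-go's standard kubeconfig loading rules (illustrative only; the test itself shells out to kubectl):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the merged kubeconfig the same way kubectl does
	// (KUBECONFIG if set, otherwise ~/.kube/config).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	const name = "functional-893000"
	if _, ok := cfg.Contexts[name]; !ok {
		// This is the condition behind `error: context "..." does not exist`.
		fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
		os.Exit(1)
	}
	fmt.Println("context found:", name)
}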
TestFunctional/parallel/DashboardCmd (0.2s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-893000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-893000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-893000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-893000 --alsologtostderr -v=1] stderr:
I0914 23:30:39.957115    7861 out.go:345] Setting OutFile to fd 1 ...
I0914 23:30:39.957517    7861 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:39.957521    7861 out.go:358] Setting ErrFile to fd 2...
I0914 23:30:39.957524    7861 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:39.957674    7861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
I0914 23:30:39.957907    7861 mustload.go:65] Loading cluster: functional-893000
I0914 23:30:39.958117    7861 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:30:39.962911    7861 out.go:177] * The control-plane node functional-893000 host is not running: state=Stopped
I0914 23:30:39.966884    7861 out.go:177]   To start a cluster, run: "minikube start -p functional-893000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (42.459ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
TestFunctional/parallel/StatusCmd (0.17s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 status: exit status 7 (75.0075ms)
-- stdout --
	functional-893000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-893000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.96925ms)
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-893000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 status -o json: exit status 7 (30.62625ms)
-- stdout --
	{"Name":"functional-893000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-893000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (30.8505ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.17s)
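
The --format/-f values passed above are Go text/template strings evaluated against minikube's status struct, so the "kublet" label in the output is the test's own literal text rather than a field name. A small stand-in reproduction of that rendering (the Status type here is inferred from the placeholders in the log, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status mirrors only the fields the templates above reference.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := Status{"functional-893000", "Stopped", "Stopped", "Stopped", "Stopped"}
	// Same template string the test passes via -f; note "kublet" is
	// literal output text, while {{.Kubelet}} is the field lookup.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}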
TestFunctional/parallel/ServiceCmdConnect (0.13s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-893000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-893000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.699541ms)
** stderr ** 
	error: context "functional-893000" does not exist
** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-893000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-893000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-893000 describe po hello-node-connect: exit status 1 (25.578625ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-893000
** /stderr **
functional_test.go:1604: "kubectl --context functional-893000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-893000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-893000 logs -l app=hello-node-connect: exit status 1 (25.530375ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-893000
** /stderr **
functional_test.go:1610: "kubectl --context functional-893000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-893000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-893000 describe svc hello-node-connect: exit status 1 (25.421083ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-893000
** /stderr **
functional_test.go:1616: "kubectl --context functional-893000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (30.073833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.13s)
TestFunctional/parallel/PersistentVolumeClaim (0.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-893000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (29.68125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
TestFunctional/parallel/SSHCmd (0.12s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "echo hello": exit status 83 (45.701542ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-893000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-893000\"\n"*. args "out/minikube-darwin-arm64 -p functional-893000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "cat /etc/hostname": exit status 83 (38.894917ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-893000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-893000"- but got *"* The control-plane node functional-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-893000\"\n"*. args "out/minikube-darwin-arm64 -p functional-893000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (36.045292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)
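
Each "(dbg) Non-zero exit: ... exit status N" line is the harness reporting the child process's exit code; in Go that code is recovered from *exec.ExitError. A rough sketch of that extraction (command line taken from this test; error handling trimmed to the essentials):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-893000", "ssh", "echo hello")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Produces values like the "exit status 83" seen above.
		fmt.Printf("exit status %d\n%s", ee.ExitCode(), out)
		return
	}
	fmt.Printf("%s", out)
}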
TestFunctional/parallel/CpCmd (0.3s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (54.394625ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-893000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh -n functional-893000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh -n functional-893000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.771041ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-893000 ssh -n functional-893000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-893000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-893000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 cp functional-893000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd313090048/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 cp functional-893000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd313090048/001/cp-test.txt: exit status 83 (50.476875ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-893000 cp functional-893000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd313090048/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh -n functional-893000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh -n functional-893000 "sudo cat /home/docker/cp-test.txt": exit status 83 (47.10675ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-893000 ssh -n functional-893000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd313090048/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-893000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (46.709625ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-893000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh -n functional-893000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh -n functional-893000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (52.942416ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-893000 ssh -n functional-893000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-893000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-893000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.30s)
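
The "(-want +got)" blocks in this failure are github.com/google/go-cmp output: "-" lines come from the expected value, "+" lines from what the command actually produced, and strings.Join({...}, "") is how the library factors a long string diff into fragments. A tiny sketch that produces the same shape of report (the want/got literals are abbreviated stand-ins for the values above):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-893000 host is not running: state=Stopped\n"
	// cmp.Diff returns "" when the values match; otherwise a
	// -want/+got report like the ones in the failures above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("/testdata/cp-test.txt content mismatch (-want +got):\n%s", diff)
	}
}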
TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7093/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /etc/test/nested/copy/7093/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /etc/test/nested/copy/7093/hosts": exit status 83 (42.216416ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /etc/test/nested/copy/7093/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-893000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-893000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (29.684167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)
TestFunctional/parallel/CertSync (0.29s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7093.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /etc/ssl/certs/7093.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /etc/ssl/certs/7093.pem": exit status 83 (46.497ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/7093.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-893000 ssh \"sudo cat /etc/ssl/certs/7093.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7093.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-893000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-893000"
	"""
)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7093.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /usr/share/ca-certificates/7093.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /usr/share/ca-certificates/7093.pem": exit status 83 (41.522542ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/7093.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-893000 ssh \"sudo cat /usr/share/ca-certificates/7093.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7093.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-893000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-893000"
	"""
)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (39.595917ms)
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-893000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-893000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-893000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/70932.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /etc/ssl/certs/70932.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /etc/ssl/certs/70932.pem": exit status 83 (47.730125ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/70932.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-893000 ssh \"sudo cat /etc/ssl/certs/70932.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/70932.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-893000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-893000"
	"""
)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/70932.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /usr/share/ca-certificates/70932.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /usr/share/ca-certificates/70932.pem": exit status 83 (44.5095ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/70932.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-893000 ssh \"sudo cat /usr/share/ca-certificates/70932.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/70932.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-893000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-893000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (41.226042ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-893000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-893000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-893000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (30.883958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-893000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-893000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.554875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-893000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-893000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-893000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-893000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-893000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-893000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-893000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-893000 -n functional-893000: exit status 7 (30.662042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "sudo systemctl is-active crio": exit status 83 (47.136542ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-893000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-893000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-893000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0914 23:30:02.485426    7695 out.go:345] Setting OutFile to fd 1 ...
I0914 23:30:02.485594    7695 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:02.485597    7695 out.go:358] Setting ErrFile to fd 2...
I0914 23:30:02.485600    7695 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:02.485734    7695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
I0914 23:30:02.485943    7695 mustload.go:65] Loading cluster: functional-893000
I0914 23:30:02.486161    7695 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:30:02.489943    7695 out.go:177] * The control-plane node functional-893000 host is not running: state=Stopped
I0914 23:30:02.500962    7695 out.go:177]   To start a cluster, run: "minikube start -p functional-893000"

stdout: * The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-893000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7696: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-893000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-893000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-893000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-893000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-893000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-893000": client config: context "functional-893000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (115.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-893000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-893000 get svc nginx-svc: exit status 1 (69.086458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-893000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-893000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (115.18s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-893000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-893000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.912ms)

** stderr ** 
	error: context "functional-893000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-893000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 service list: exit status 83 (43.017917ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-893000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-893000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 service list -o json: exit status 83 (41.874459ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-893000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 service --namespace=default --https --url hello-node: exit status 83 (42.570708ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-893000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 service hello-node --url --format={{.IP}}: exit status 83 (40.948792ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-893000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-893000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 service hello-node --url: exit status 83 (41.722708ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-893000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test.go:1569: failed to parse "* The control-plane node functional-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-893000\"": parse "* The control-plane node functional-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-893000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 version -o=json --components: exit status 83 (41.919958ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-893000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-893000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-893000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-893000 image ls --format short --alsologtostderr:
I0914 23:30:44.871183    7982 out.go:345] Setting OutFile to fd 1 ...
I0914 23:30:44.871348    7982 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:44.871351    7982 out.go:358] Setting ErrFile to fd 2...
I0914 23:30:44.871354    7982 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:44.871479    7982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
I0914 23:30:44.871946    7982 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:30:44.872012    7982 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-893000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-893000 image ls --format table --alsologtostderr:
I0914 23:30:45.093938    7997 out.go:345] Setting OutFile to fd 1 ...
I0914 23:30:45.094060    7997 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:45.094063    7997 out.go:358] Setting ErrFile to fd 2...
I0914 23:30:45.094066    7997 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:45.094181    7997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
I0914 23:30:45.094577    7997 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:30:45.094640    7997 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-893000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-893000 image ls --format json --alsologtostderr:
I0914 23:30:45.059818    7995 out.go:345] Setting OutFile to fd 1 ...
I0914 23:30:45.059944    7995 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:45.059947    7995 out.go:358] Setting ErrFile to fd 2...
I0914 23:30:45.059950    7995 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:45.060073    7995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
I0914 23:30:45.060476    7995 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:30:45.060535    7995 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-893000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-893000 image ls --format yaml --alsologtostderr:
I0914 23:30:44.907535    7984 out.go:345] Setting OutFile to fd 1 ...
I0914 23:30:44.907685    7984 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:44.907689    7984 out.go:358] Setting ErrFile to fd 2...
I0914 23:30:44.907691    7984 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:44.907817    7984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
I0914 23:30:44.908289    7984 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:30:44.908366    7984 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh pgrep buildkitd: exit status 83 (42.753125ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image build -t localhost/my-image:functional-893000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-893000 image build -t localhost/my-image:functional-893000 testdata/build --alsologtostderr:
I0914 23:30:44.987370    7991 out.go:345] Setting OutFile to fd 1 ...
I0914 23:30:44.987680    7991 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:44.987684    7991 out.go:358] Setting ErrFile to fd 2...
I0914 23:30:44.987686    7991 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:30:44.987817    7991 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
I0914 23:30:44.988214    7991 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:30:44.988629    7991 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:30:44.988863    7991 build_images.go:133] succeeded building to: 
I0914 23:30:44.988867    7991 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image ls
functional_test.go:446: expected "localhost/my-image:functional-893000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image load --daemon kicbase/echo-server:functional-893000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-893000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image load --daemon kicbase/echo-server:functional-893000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-893000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-893000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image load --daemon kicbase/echo-server:functional-893000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-893000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image save kicbase/echo-server:functional-893000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-893000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-893000 docker-env) && out/minikube-darwin-arm64 status -p functional-893000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-893000 docker-env) && out/minikube-darwin-arm64 status -p functional-893000": exit status 1 (47.290208ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 update-context --alsologtostderr -v=2: exit status 83 (39.616041ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
** stderr ** 
	I0914 23:30:45.129439    7999 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:30:45.129840    7999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:30:45.129843    7999 out.go:358] Setting ErrFile to fd 2...
	I0914 23:30:45.129846    7999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:30:45.129975    7999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:30:45.130167    7999 mustload.go:65] Loading cluster: functional-893000
	I0914 23:30:45.130351    7999 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:30:45.133694    7999 out.go:177] * The control-plane node functional-893000 host is not running: state=Stopped
	I0914 23:30:45.136670    7999 out.go:177]   To start a cluster, run: "minikube start -p functional-893000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-893000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-893000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 update-context --alsologtostderr -v=2: exit status 83 (40.408708ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
** stderr ** 
	I0914 23:30:45.211879    8003 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:30:45.212016    8003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:30:45.212020    8003 out.go:358] Setting ErrFile to fd 2...
	I0914 23:30:45.212022    8003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:30:45.212161    8003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:30:45.212358    8003 mustload.go:65] Loading cluster: functional-893000
	I0914 23:30:45.212567    8003 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:30:45.216649    8003 out.go:177] * The control-plane node functional-893000 host is not running: state=Stopped
	I0914 23:30:45.220679    8003 out.go:177]   To start a cluster, run: "minikube start -p functional-893000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-893000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-893000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 update-context --alsologtostderr -v=2: exit status 83 (41.53575ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
** stderr ** 
	I0914 23:30:45.169559    8001 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:30:45.169690    8001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:30:45.169693    8001 out.go:358] Setting ErrFile to fd 2...
	I0914 23:30:45.169696    8001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:30:45.169837    8001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:30:45.170033    8001 mustload.go:65] Loading cluster: functional-893000
	I0914 23:30:45.170230    8001 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:30:45.174693    8001 out.go:177] * The control-plane node functional-893000 host is not running: state=Stopped
	I0914 23:30:45.178690    8001 out.go:177]   To start a cluster, run: "minikube start -p functional-893000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-893000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-893000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)
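
Both UpdateContextCmd failures above (no_minikube_cluster and no_clusters) share one root cause: the functional-893000 host is stopped, so "update-context" exits with status 83 and prints the "host is not running" hint instead of the expected "context has been updated" line. Below is a minimal reproduction sketch, not the suite's actual helper code; the binary path, profile name, status invocation, and expected output string are all taken from this log.

	// Gate the update-context assertion on a running host.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "functional-893000" // profile name from the log above

		// The post-mortem sections in this report use the same status invocation.
		out, _ := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) != "Running" {
			fmt.Println("host is not running; update-context will exit with status 83")
			return
		}

		res, err := exec.Command("out/minikube-darwin-arm64",
			"-p", profile, "update-context").CombinedOutput()
		if err != nil || !strings.Contains(string(res), "context has been updated") {
			fmt.Printf("update-context failed: %v\n%s", err, res)
		}
	}

Run against this report's environment, the sketch would take the "host is not running" branch, matching both exit-status-83 failures.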

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.035540583s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
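
The dig probe above times out because nothing answers on the cluster DNS service IP: the tunnel that should forward 10.96.0.10 from the host into the cluster is not up, since the cluster itself never started. For reference, a rough Go equivalent of the probe (dig @10.96.0.10 nginx-svc.default.svc.cluster.local. A); the IP, name, and timeouts come from the log, the rest is illustrative.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Always dial the cluster DNS service, mirroring dig's @10.96.0.10.
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}

		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()

		ips, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("cluster DNS unreachable (matches the timeout above):", err)
			return
		}
		fmt.Println("resolved:", ips)
	}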

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:57815->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)
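
The same breakage seen through HTTP: the lookup of nginx-svc.default.svc.cluster.local. against 10.96.0.10 times out before any request is made. A sketch of the access pattern follows; the resolver/transport wiring is illustrative, not the test's own code.

	package main

	import (
		"context"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	func main() {
		resolver := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		dialer := &net.Dialer{Resolver: resolver}
		client := &http.Client{
			Transport: &http.Transport{DialContext: dialer.DialContext},
			Timeout:   30 * time.Second,
		}

		resp, err := client.Get("http://nginx-svc.default.svc.cluster.local./")
		if err != nil {
			fmt.Println("request failed, as in the i/o timeout above:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status) // the test wants "Welcome to nginx!" in the body
	}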

TestMultiControlPlane/serial/StartCluster (10.1s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-603000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-603000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.034016541s)

-- stdout --
	* [ha-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-603000" primary control-plane node in "ha-603000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-603000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:32:53.269372    8046 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:32:53.269506    8046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:32:53.269509    8046 out.go:358] Setting ErrFile to fd 2...
	I0914 23:32:53.269512    8046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:32:53.269647    8046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:32:53.270696    8046 out.go:352] Setting JSON to false
	I0914 23:32:53.286802    8046 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5542,"bootTime":1726376431,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:32:53.286870    8046 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:32:53.293578    8046 out.go:177] * [ha-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:32:53.301729    8046 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:32:53.301789    8046 notify.go:220] Checking for updates...
	I0914 23:32:53.308650    8046 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:32:53.311699    8046 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:32:53.314663    8046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:32:53.317670    8046 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:32:53.320679    8046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:32:53.322083    8046 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:32:53.326625    8046 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:32:53.333640    8046 start.go:297] selected driver: qemu2
	I0914 23:32:53.333648    8046 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:32:53.333657    8046 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:32:53.335992    8046 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:32:53.338719    8046 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:32:53.341723    8046 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:32:53.341739    8046 cni.go:84] Creating CNI manager for ""
	I0914 23:32:53.341756    8046 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0914 23:32:53.341759    8046 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 23:32:53.341786    8046 start.go:340] cluster config:
	{Name:ha-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:32:53.345331    8046 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:32:53.353669    8046 out.go:177] * Starting "ha-603000" primary control-plane node in "ha-603000" cluster
	I0914 23:32:53.357683    8046 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:32:53.357699    8046 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:32:53.357720    8046 cache.go:56] Caching tarball of preloaded images
	I0914 23:32:53.357785    8046 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:32:53.357798    8046 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:32:53.357998    8046 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/ha-603000/config.json ...
	I0914 23:32:53.358011    8046 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/ha-603000/config.json: {Name:mk8c7b46d603e301673f18ad9da1536b1b97fc61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:32:53.358342    8046 start.go:360] acquireMachinesLock for ha-603000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:32:53.358392    8046 start.go:364] duration metric: took 44.916µs to acquireMachinesLock for "ha-603000"
	I0914 23:32:53.358402    8046 start.go:93] Provisioning new machine with config: &{Name:ha-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:32:53.358435    8046 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:32:53.366719    8046 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:32:53.383981    8046 start.go:159] libmachine.API.Create for "ha-603000" (driver="qemu2")
	I0914 23:32:53.384013    8046 client.go:168] LocalClient.Create starting
	I0914 23:32:53.384086    8046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:32:53.384114    8046 main.go:141] libmachine: Decoding PEM data...
	I0914 23:32:53.384124    8046 main.go:141] libmachine: Parsing certificate...
	I0914 23:32:53.384161    8046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:32:53.384184    8046 main.go:141] libmachine: Decoding PEM data...
	I0914 23:32:53.384193    8046 main.go:141] libmachine: Parsing certificate...
	I0914 23:32:53.384718    8046 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:32:53.564324    8046 main.go:141] libmachine: Creating SSH key...
	I0914 23:32:53.756002    8046 main.go:141] libmachine: Creating Disk image...
	I0914 23:32:53.756009    8046 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:32:53.756266    8046 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2
	I0914 23:32:53.766270    8046 main.go:141] libmachine: STDOUT: 
	I0914 23:32:53.766296    8046 main.go:141] libmachine: STDERR: 
	I0914 23:32:53.766358    8046 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2 +20000M
	I0914 23:32:53.774276    8046 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:32:53.774293    8046 main.go:141] libmachine: STDERR: 
	I0914 23:32:53.774306    8046 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2
	I0914 23:32:53.774309    8046 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:32:53.774318    8046 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:32:53.774345    8046 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:cf:02:2b:aa:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2
	I0914 23:32:53.775981    8046 main.go:141] libmachine: STDOUT: 
	I0914 23:32:53.775996    8046 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:32:53.776017    8046 client.go:171] duration metric: took 392.005ms to LocalClient.Create
	I0914 23:32:55.778122    8046 start.go:128] duration metric: took 2.419717209s to createHost
	I0914 23:32:55.778175    8046 start.go:83] releasing machines lock for "ha-603000", held for 2.419821417s
	W0914 23:32:55.778200    8046 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:32:55.788775    8046 out.go:177] * Deleting "ha-603000" in qemu2 ...
	W0914 23:32:55.824597    8046 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:32:55.824622    8046 start.go:729] Will try again in 5 seconds ...
	I0914 23:33:00.826788    8046 start.go:360] acquireMachinesLock for ha-603000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:33:00.827264    8046 start.go:364] duration metric: took 372.542µs to acquireMachinesLock for "ha-603000"
	I0914 23:33:00.827398    8046 start.go:93] Provisioning new machine with config: &{Name:ha-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:33:00.827623    8046 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:33:00.849421    8046 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:33:00.900607    8046 start.go:159] libmachine.API.Create for "ha-603000" (driver="qemu2")
	I0914 23:33:00.900663    8046 client.go:168] LocalClient.Create starting
	I0914 23:33:00.900794    8046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:33:00.900862    8046 main.go:141] libmachine: Decoding PEM data...
	I0914 23:33:00.900883    8046 main.go:141] libmachine: Parsing certificate...
	I0914 23:33:00.900943    8046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:33:00.900989    8046 main.go:141] libmachine: Decoding PEM data...
	I0914 23:33:00.901003    8046 main.go:141] libmachine: Parsing certificate...
	I0914 23:33:00.901527    8046 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:33:01.072596    8046 main.go:141] libmachine: Creating SSH key...
	I0914 23:33:01.201989    8046 main.go:141] libmachine: Creating Disk image...
	I0914 23:33:01.201995    8046 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:33:01.202263    8046 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2
	I0914 23:33:01.211795    8046 main.go:141] libmachine: STDOUT: 
	I0914 23:33:01.211811    8046 main.go:141] libmachine: STDERR: 
	I0914 23:33:01.211867    8046 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2 +20000M
	I0914 23:33:01.219702    8046 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:33:01.219725    8046 main.go:141] libmachine: STDERR: 
	I0914 23:33:01.219737    8046 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2
	I0914 23:33:01.219742    8046 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:33:01.219754    8046 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:33:01.219790    8046 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d8:28:16:47:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2
	I0914 23:33:01.221463    8046 main.go:141] libmachine: STDOUT: 
	I0914 23:33:01.221477    8046 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:33:01.221502    8046 client.go:171] duration metric: took 320.8385ms to LocalClient.Create
	I0914 23:33:03.223643    8046 start.go:128] duration metric: took 2.396035709s to createHost
	I0914 23:33:03.223715    8046 start.go:83] releasing machines lock for "ha-603000", held for 2.396472042s
	W0914 23:33:03.224084    8046 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:33:03.240757    8046 out.go:201] 
	W0914 23:33:03.245920    8046 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:33:03.245949    8046 out.go:270] * 
	* 
	W0914 23:33:03.248566    8046 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:33:03.259751    8046 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-603000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (68.68275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.10s)
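
Every qemu2 start in this report fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never gets its network file descriptor and minikube aborts with GUEST_PROVISION. "Connection refused" on that socket means no socket_vmnet daemon is listening on the host. A minimal probe sketch, with the socket path taken from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client uses in the log above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet is not reachable:", err)
			fmt.Println("start the socket_vmnet daemon on the host before retrying")
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

All of the remaining TestMultiControlPlane failures below are downstream of this one provisioning error.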

TestMultiControlPlane/serial/DeployApp (90.43s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.785792ms)

** stderr ** 
	error: cluster "ha-603000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- rollout status deployment/busybox: exit status 1 (58.2475ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.507041ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.338208ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.694167ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.608625ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.310541ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.512042ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.024917ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.902791ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.573ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.692583ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.162042ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.068708ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.149958ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.404917ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.469292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (90.43s)
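
Because StartCluster never provisioned ha-603000, the kubeconfig has no server entry for it, which is why every kubectl invocation above fails immediately with "no server found for cluster" and retrying cannot help. A sanity-check sketch using client-go's kubeconfig loader; it assumes k8s.io/client-go is available in the module, and the kubeconfig path is the KUBECONFIG shown earlier in this report.

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(
			"/Users/jenkins/minikube-integration/19644-6577/kubeconfig")
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		cluster, ok := cfg.Clusters["ha-603000"]
		if !ok || cluster.Server == "" {
			// This is exactly the state kubectl reports above.
			fmt.Println(`no server found for cluster "ha-603000"`)
			return
		}
		fmt.Println("server:", cluster.Server)
	}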

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.745458ms)

** stderr ** 
	error: no server found for cluster "ha-603000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.87375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-603000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-603000 -v=7 --alsologtostderr: exit status 83 (42.015833ms)

-- stdout --
	* The control-plane node ha-603000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-603000"

-- /stdout --
** stderr ** 
	I0914 23:34:33.887094    8128 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:33.887583    8128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:33.887589    8128 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:33.887592    8128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:33.887779    8128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:33.887984    8128 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:33.888211    8128 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:33.892424    8128 out.go:177] * The control-plane node ha-603000 host is not running: state=Stopped
	I0914 23:34:33.896264    8128 out.go:177]   To start a cluster, run: "minikube start -p ha-603000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-603000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (29.959958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-603000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-603000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.094416ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-603000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-603000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-603000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.847292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-603000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-603000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.09175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status --output json -v=7 --alsologtostderr: exit status 7 (30.121083ms)

                                                
                                                
-- stdout --
	{"Name":"ha-603000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:34.093403    8140 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:34.093563    8140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:34.093566    8140 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:34.093569    8140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:34.093702    8140 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:34.093836    8140 out.go:352] Setting JSON to true
	I0914 23:34:34.093845    8140 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:34.093913    8140 notify.go:220] Checking for updates...
	I0914 23:34:34.094067    8140 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:34.094087    8140 status.go:255] checking status of ha-603000 ...
	I0914 23:34:34.094307    8140 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:34:34.094311    8140 status.go:343] host is not running, skipping remaining checks
	I0914 23:34:34.094313    8140 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-603000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
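The unmarshal failure above is a shape mismatch rather than corrupt output: with the cluster reduced to a single stopped node, "minikube status --output json" emits one JSON object (see the stdout block above), while the test decodes into a slice, []cmd.Status. A minimal sketch reproducing the same error class, using a hypothetical stand-in for minikube's cmd.Status type:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is a hypothetical stand-in for minikube's cmd.Status.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		// Single-node output: one JSON object, not an array.
		out := []byte(`{"Name":"ha-603000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

		var many []Status
		if err := json.Unmarshal(out, &many); err != nil {
			fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
		}

		var one Status
		if err := json.Unmarshal(out, &one); err == nil {
			fmt.Printf("decodes fine as a single struct: %+v\n", one)
		}
	}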
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.009333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 node stop m02 -v=7 --alsologtostderr: exit status 85 (46.518208ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:34.154136    8144 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:34.154751    8144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:34.154754    8144 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:34.154759    8144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:34.154951    8144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:34.155204    8144 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:34.155408    8144 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:34.158626    8144 out.go:201] 
	W0914 23:34:34.161556    8144 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0914 23:34:34.161565    8144 out.go:270] * 
	* 
	W0914 23:34:34.163468    8144 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:34:34.167492    8144 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-603000 node stop m02 -v=7 --alsologtostderr": exit status 85
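Exit status 85 is a direct consequence of the earlier StartCluster failure: secondary nodes in a multi-node profile are addressed as m02, m03, and so on, but this stopped profile's Nodes list (visible in the JSON dumps above) holds only the unnamed primary node, so the m02 lookup has nothing to find. A hedged sketch of that lookup shape, with hypothetical types rather than minikube's actual config package:

	package main

	import "fmt"

	// Node is a hypothetical stand-in for a profile's node entry.
	type Node struct {
		Name         string
		ControlPlane bool
		Worker       bool
	}

	// findNode scans the profile's nodes by name, as the failing lookup must.
	func findNode(nodes []Node, name string) (*Node, error) {
		for i := range nodes {
			if nodes[i].Name == name {
				return &nodes[i], nil
			}
		}
		return nil, fmt.Errorf("retrieving node: Could not find node %s", name)
	}

	func main() {
		// As in the log: only the unnamed primary control-plane node exists.
		nodes := []Node{{Name: "", ControlPlane: true, Worker: true}}
		if _, err := findNode(nodes, "m02"); err != nil {
			fmt.Println("GUEST_NODE_RETRIEVE:", err)
		}
	}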
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (30.860375ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:34.201574    8146 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:34.201741    8146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:34.201744    8146 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:34.201746    8146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:34.201890    8146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:34.202018    8146 out.go:352] Setting JSON to false
	I0914 23:34:34.202026    8146 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:34.202093    8146 notify.go:220] Checking for updates...
	I0914 23:34:34.202214    8146 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:34.202220    8146 status.go:255] checking status of ha-603000 ...
	I0914 23:34:34.202460    8146 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:34:34.202464    8146 status.go:343] host is not running, skipping remaining checks
	I0914 23:34:34.202466    8146 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.688542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-603000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.828125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (45.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.824458ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:34.341428    8155 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:34.341822    8155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:34.341826    8155 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:34.341829    8155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:34.342019    8155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:34.342239    8155 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:34.342426    8155 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:34.346598    8155 out.go:201] 
	W0914 23:34:34.350521    8155 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0914 23:34:34.350526    8155 out.go:270] * 
	* 
	W0914 23:34:34.352546    8155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:34:34.356532    8155 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:422: I0914 23:34:34.341428    8155 out.go:345] Setting OutFile to fd 1 ...
I0914 23:34:34.341822    8155 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:34:34.341826    8155 out.go:358] Setting ErrFile to fd 2...
I0914 23:34:34.341829    8155 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:34:34.342019    8155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
I0914 23:34:34.342239    8155 mustload.go:65] Loading cluster: ha-603000
I0914 23:34:34.342426    8155 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:34:34.346598    8155 out.go:201] 
W0914 23:34:34.350521    8155 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0914 23:34:34.350526    8155 out.go:270] * 
* 
W0914 23:34:34.352546    8155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0914 23:34:34.356532    8155 out.go:201] 

                                                
                                                
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-603000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (31.481292ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:34.390307    8157 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:34.390428    8157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:34.390432    8157 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:34.390434    8157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:34.390593    8157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:34.390705    8157 out.go:352] Setting JSON to false
	I0914 23:34:34.390714    8157 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:34.390776    8157 notify.go:220] Checking for updates...
	I0914 23:34:34.390920    8157 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:34.390926    8157 status.go:255] checking status of ha-603000 ...
	I0914 23:34:34.391164    8157 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:34:34.391168    8157 status.go:343] host is not running, skipping remaining checks
	I0914 23:34:34.391170    8157 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (73.79125ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:35.467928    8162 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:35.468130    8162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:35.468135    8162 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:35.468137    8162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:35.468308    8162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:35.468462    8162 out.go:352] Setting JSON to false
	I0914 23:34:35.468473    8162 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:35.468534    8162 notify.go:220] Checking for updates...
	I0914 23:34:35.468745    8162 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:35.468758    8162 status.go:255] checking status of ha-603000 ...
	I0914 23:34:35.469052    8162 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:34:35.469057    8162 status.go:343] host is not running, skipping remaining checks
	I0914 23:34:35.469060    8162 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (74.919708ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:37.585875    8164 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:37.586074    8164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:37.586078    8164 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:37.586082    8164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:37.586251    8164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:37.586429    8164 out.go:352] Setting JSON to false
	I0914 23:34:37.586447    8164 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:37.586480    8164 notify.go:220] Checking for updates...
	I0914 23:34:37.586720    8164 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:37.586731    8164 status.go:255] checking status of ha-603000 ...
	I0914 23:34:37.587062    8164 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:34:37.587066    8164 status.go:343] host is not running, skipping remaining checks
	I0914 23:34:37.587069    8164 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (72.932916ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:39.335108    8166 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:39.335313    8166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:39.335318    8166 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:39.335321    8166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:39.335480    8166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:39.335636    8166 out.go:352] Setting JSON to false
	I0914 23:34:39.335647    8166 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:39.335686    8166 notify.go:220] Checking for updates...
	I0914 23:34:39.335933    8166 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:39.335943    8166 status.go:255] checking status of ha-603000 ...
	I0914 23:34:39.336226    8166 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:34:39.336230    8166 status.go:343] host is not running, skipping remaining checks
	I0914 23:34:39.336233    8166 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (75.749083ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:41.626594    8168 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:41.626786    8168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:41.626790    8168 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:41.626793    8168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:41.626992    8168 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:41.627147    8168 out.go:352] Setting JSON to false
	I0914 23:34:41.627158    8168 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:41.627214    8168 notify.go:220] Checking for updates...
	I0914 23:34:41.627444    8168 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:41.627452    8168 status.go:255] checking status of ha-603000 ...
	I0914 23:34:41.627753    8168 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:34:41.627758    8168 status.go:343] host is not running, skipping remaining checks
	I0914 23:34:41.627761    8168 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (74.086083ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:46.519704    8170 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:46.519885    8170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:46.519889    8170 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:46.519892    8170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:46.520060    8170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:46.520225    8170 out.go:352] Setting JSON to false
	I0914 23:34:46.520236    8170 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:46.520292    8170 notify.go:220] Checking for updates...
	I0914 23:34:46.520509    8170 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:46.520517    8170 status.go:255] checking status of ha-603000 ...
	I0914 23:34:46.520846    8170 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:34:46.520851    8170 status.go:343] host is not running, skipping remaining checks
	I0914 23:34:46.520854    8170 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (75.014625ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:53.179265    8172 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:53.179461    8172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:53.179465    8172 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:53.179469    8172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:53.179637    8172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:53.179801    8172 out.go:352] Setting JSON to false
	I0914 23:34:53.179813    8172 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:53.179852    8172 notify.go:220] Checking for updates...
	I0914 23:34:53.180093    8172 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:53.180100    8172 status.go:255] checking status of ha-603000 ...
	I0914 23:34:53.180421    8172 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:34:53.180426    8172 status.go:343] host is not running, skipping remaining checks
	I0914 23:34:53.180429    8172 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (74.934042ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:34:59.263484    8174 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:34:59.263685    8174 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:59.263690    8174 out.go:358] Setting ErrFile to fd 2...
	I0914 23:34:59.263693    8174 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:34:59.263862    8174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:34:59.264028    8174 out.go:352] Setting JSON to false
	I0914 23:34:59.264039    8174 mustload.go:65] Loading cluster: ha-603000
	I0914 23:34:59.264075    8174 notify.go:220] Checking for updates...
	I0914 23:34:59.264321    8174 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:34:59.264328    8174 status.go:255] checking status of ha-603000 ...
	I0914 23:34:59.264656    8174 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:34:59.264661    8174 status.go:343] host is not running, skipping remaining checks
	I0914 23:34:59.264664    8174 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (74.155625ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:35:19.868996    8179 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:35:19.869185    8179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:19.869190    8179 out.go:358] Setting ErrFile to fd 2...
	I0914 23:35:19.869193    8179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:19.869349    8179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:35:19.869500    8179 out.go:352] Setting JSON to false
	I0914 23:35:19.869512    8179 mustload.go:65] Loading cluster: ha-603000
	I0914 23:35:19.869548    8179 notify.go:220] Checking for updates...
	I0914 23:35:19.869817    8179 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:35:19.869824    8179 status.go:255] checking status of ha-603000 ...
	I0914 23:35:19.870144    8179 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:35:19.870149    8179 status.go:343] host is not running, skipping remaining checks
	I0914 23:35:19.870152    8179 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr" : exit status 7
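The widening gaps between the ha_test.go:428 retries above (23:34:34, :35, :37, :39, :41, :46, :53, :59, then 23:35:19) show the test polling status with progressively longer waits before giving up, which is why this subtest burns 45.59s against a host that is already known to be Stopped. A rough sketch of that polling pattern, assuming a growing backoff; the real test drives the binary through its own harness:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(45 * time.Second)
		delay := time.Second
		for time.Now().Before(deadline) {
			cmd := exec.Command("out/minikube-darwin-arm64", "-p", "ha-603000",
				"status", "-v=7", "--alsologtostderr")
			if cmd.Run() == nil {
				return // exit status 0: all components report Running
			}
			time.Sleep(delay)
			delay += delay / 2 // widen the wait between polls
		}
		fmt.Println("failed to run minikube status: exit status 7")
	}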
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (33.442208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (45.59s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-603000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-603000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.659458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-603000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-603000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-603000 -v=7 --alsologtostderr: (3.611436458s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-603000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-603000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.2304345s)

                                                
                                                
-- stdout --
	* [ha-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-603000" primary control-plane node in "ha-603000" cluster
	* Restarting existing qemu2 VM for "ha-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
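The repeated ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused above means nothing was listening on the socket_vmnet unix socket when QEMU tried to attach the VM's network backend; the profile's SocketVMnetPath points there, so every VM restart hits the same wall. An illustrative probe (not part of the test suite) that surfaces the same condition:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The qemu2 driver's networking rides on this unix socket; if the
		// socket_vmnet daemon is down, the dial fails with "connection refused".
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}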
** stderr ** 
	I0914 23:35:23.689943    8213 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:35:23.690112    8213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:23.690117    8213 out.go:358] Setting ErrFile to fd 2...
	I0914 23:35:23.690119    8213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:23.690295    8213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:35:23.691450    8213 out.go:352] Setting JSON to false
	I0914 23:35:23.710340    8213 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5692,"bootTime":1726376431,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:35:23.710443    8213 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:35:23.714547    8213 out.go:177] * [ha-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:35:23.722436    8213 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:35:23.722460    8213 notify.go:220] Checking for updates...
	I0914 23:35:23.726845    8213 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:35:23.730367    8213 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:35:23.737525    8213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:35:23.740466    8213 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:35:23.743351    8213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:35:23.746713    8213 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:35:23.746768    8213 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:35:23.751417    8213 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:35:23.758413    8213 start.go:297] selected driver: qemu2
	I0914 23:35:23.758419    8213 start.go:901] validating driver "qemu2" against &{Name:ha-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:35:23.758496    8213 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:35:23.760951    8213 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:35:23.760976    8213 cni.go:84] Creating CNI manager for ""
	I0914 23:35:23.761006    8213 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0914 23:35:23.761054    8213 start.go:340] cluster config:
	{Name:ha-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:35:23.764977    8213 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:35:23.772394    8213 out.go:177] * Starting "ha-603000" primary control-plane node in "ha-603000" cluster
	I0914 23:35:23.776402    8213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:35:23.776417    8213 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:35:23.776432    8213 cache.go:56] Caching tarball of preloaded images
	I0914 23:35:23.776507    8213 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:35:23.776512    8213 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:35:23.776592    8213 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/ha-603000/config.json ...
	I0914 23:35:23.777009    8213 start.go:360] acquireMachinesLock for ha-603000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:35:23.777045    8213 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "ha-603000"
	I0914 23:35:23.777053    8213 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:35:23.777059    8213 fix.go:54] fixHost starting: 
	I0914 23:35:23.777176    8213 fix.go:112] recreateIfNeeded on ha-603000: state=Stopped err=<nil>
	W0914 23:35:23.777185    8213 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:35:23.784436    8213 out.go:177] * Restarting existing qemu2 VM for "ha-603000" ...
	I0914 23:35:23.795899    8213 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:35:23.795947    8213 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d8:28:16:47:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2
	I0914 23:35:23.798199    8213 main.go:141] libmachine: STDOUT: 
	I0914 23:35:23.798220    8213 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:35:23.798256    8213 fix.go:56] duration metric: took 21.196709ms for fixHost
	I0914 23:35:23.798261    8213 start.go:83] releasing machines lock for "ha-603000", held for 21.212167ms
	W0914 23:35:23.798267    8213 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:35:23.798299    8213 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:35:23.798304    8213 start.go:729] Will try again in 5 seconds ...
	I0914 23:35:28.800387    8213 start.go:360] acquireMachinesLock for ha-603000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:35:28.800710    8213 start.go:364] duration metric: took 261.167µs to acquireMachinesLock for "ha-603000"
	I0914 23:35:28.800838    8213 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:35:28.800862    8213 fix.go:54] fixHost starting: 
	I0914 23:35:28.801653    8213 fix.go:112] recreateIfNeeded on ha-603000: state=Stopped err=<nil>
	W0914 23:35:28.801680    8213 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:35:28.806095    8213 out.go:177] * Restarting existing qemu2 VM for "ha-603000" ...
	I0914 23:35:28.810083    8213 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:35:28.810301    8213 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d8:28:16:47:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2
	I0914 23:35:28.819421    8213 main.go:141] libmachine: STDOUT: 
	I0914 23:35:28.819530    8213 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:35:28.819604    8213 fix.go:56] duration metric: took 18.747041ms for fixHost
	I0914 23:35:28.819618    8213 start.go:83] releasing machines lock for "ha-603000", held for 18.883917ms
	W0914 23:35:28.819772    8213 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:35:28.826141    8213 out.go:201] 
	W0914 23:35:28.830175    8213 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:35:28.830245    8213 out.go:270] * 
	* 
	W0914 23:35:28.833011    8213 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:35:28.841030    8213 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-603000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-603000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (33.027667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.98s)
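
Every failed start in this section dies at the same step: the qemu2 driver cannot connect to the unix socket at /var/run/socket_vmnet, so the VM is never actually restarted. A minimal host-side triage sketch follows; it assumes a Homebrew-managed socket_vmnet install at the paths shown in the log and is not part of the recorded test run:

	# Confirm the socket exists and the daemon is loaded (run on the build host).
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep socket_vmnet
	# If the daemon is down, restart it (assumes installation as a Homebrew service).
	sudo brew services restart socket_vmnet
	# Probe the socket the same way the driver does; on failure this prints the
	# same "Failed to connect ... Connection refused" error seen above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true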

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.384417ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-603000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-603000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:35:28.986863    8225 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:35:28.987257    8225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:28.987260    8225 out.go:358] Setting ErrFile to fd 2...
	I0914 23:35:28.987263    8225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:28.987407    8225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:35:28.987637    8225 mustload.go:65] Loading cluster: ha-603000
	I0914 23:35:28.987836    8225 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:35:28.992678    8225 out.go:177] * The control-plane node ha-603000 host is not running: state=Stopped
	I0914 23:35:28.995683    8225 out.go:177]   To start a cluster, run: "minikube start -p ha-603000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-603000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (30.756833ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:35:29.028540    8227 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:35:29.028687    8227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:29.028690    8227 out.go:358] Setting ErrFile to fd 2...
	I0914 23:35:29.028692    8227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:29.028835    8227 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:35:29.028985    8227 out.go:352] Setting JSON to false
	I0914 23:35:29.028994    8227 mustload.go:65] Loading cluster: ha-603000
	I0914 23:35:29.029066    8227 notify.go:220] Checking for updates...
	I0914 23:35:29.029231    8227 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:35:29.029237    8227 status.go:255] checking status of ha-603000 ...
	I0914 23:35:29.029476    8227 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:35:29.029479    8227 status.go:343] host is not running, skipping remaining checks
	I0914 23:35:29.029482    8227 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.705625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-603000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.477042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
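
The assertion above checks only the Status field of the ha-603000 entry in the profile list JSON (expected "Degraded", observed "Stopped"). When reproducing by hand, a jq filter over the same command isolates the field the test compares; jq is an assumption here, not something the test itself uses:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-603000") | .Status'
	# Prints "Stopped" for this run; the test requires "Degraded".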

                                                
                                    
TestMultiControlPlane/serial/StopCluster (1.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-603000 stop -v=7 --alsologtostderr: (1.833650791s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (64.779209ms)

                                                
                                                
-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:35:31.036844    8246 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:35:31.037011    8246 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:31.037016    8246 out.go:358] Setting ErrFile to fd 2...
	I0914 23:35:31.037019    8246 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:31.037217    8246 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:35:31.037382    8246 out.go:352] Setting JSON to false
	I0914 23:35:31.037393    8246 mustload.go:65] Loading cluster: ha-603000
	I0914 23:35:31.037422    8246 notify.go:220] Checking for updates...
	I0914 23:35:31.037674    8246 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:35:31.037681    8246 status.go:255] checking status of ha-603000 ...
	I0914 23:35:31.037986    8246 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0914 23:35:31.037991    8246 status.go:343] host is not running, skipping remaining checks
	I0914 23:35:31.037994    8246 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (32.126708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.93s)
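
All three assertions above parse the plain-text status output, and with only one node present none of the expected counts can match. A rough manual equivalent of the first check, using grep rather than the test's own parser:

	out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr \
	  | grep -c "type: Control Plane"
	# Prints 1 here; the test expects two or more control-plane entries.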

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-603000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-603000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.18404525s)

                                                
                                                
-- stdout --
	* [ha-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-603000" primary control-plane node in "ha-603000" cluster
	* Restarting existing qemu2 VM for "ha-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:35:31.099891    8250 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:35:31.100030    8250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:31.100034    8250 out.go:358] Setting ErrFile to fd 2...
	I0914 23:35:31.100037    8250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:31.100172    8250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:35:31.101211    8250 out.go:352] Setting JSON to false
	I0914 23:35:31.117308    8250 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5700,"bootTime":1726376431,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:35:31.117364    8250 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:35:31.122074    8250 out.go:177] * [ha-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:35:31.128943    8250 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:35:31.128993    8250 notify.go:220] Checking for updates...
	I0914 23:35:31.136874    8250 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:35:31.139906    8250 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:35:31.141178    8250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:35:31.143870    8250 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:35:31.146908    8250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:35:31.150157    8250 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:35:31.150410    8250 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:35:31.154885    8250 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:35:31.161944    8250 start.go:297] selected driver: qemu2
	I0914 23:35:31.161951    8250 start.go:901] validating driver "qemu2" against &{Name:ha-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:35:31.162015    8250 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:35:31.164287    8250 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:35:31.164307    8250 cni.go:84] Creating CNI manager for ""
	I0914 23:35:31.164326    8250 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0914 23:35:31.164371    8250 start.go:340] cluster config:
	{Name:ha-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:35:31.167901    8250 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:35:31.174924    8250 out.go:177] * Starting "ha-603000" primary control-plane node in "ha-603000" cluster
	I0914 23:35:31.178885    8250 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:35:31.178901    8250 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:35:31.178914    8250 cache.go:56] Caching tarball of preloaded images
	I0914 23:35:31.178983    8250 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:35:31.178988    8250 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:35:31.179040    8250 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/ha-603000/config.json ...
	I0914 23:35:31.179456    8250 start.go:360] acquireMachinesLock for ha-603000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:35:31.179487    8250 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "ha-603000"
	I0914 23:35:31.179495    8250 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:35:31.179501    8250 fix.go:54] fixHost starting: 
	I0914 23:35:31.179612    8250 fix.go:112] recreateIfNeeded on ha-603000: state=Stopped err=<nil>
	W0914 23:35:31.179621    8250 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:35:31.186857    8250 out.go:177] * Restarting existing qemu2 VM for "ha-603000" ...
	I0914 23:35:31.190882    8250 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:35:31.190920    8250 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d8:28:16:47:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2
	I0914 23:35:31.193000    8250 main.go:141] libmachine: STDOUT: 
	I0914 23:35:31.193016    8250 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:35:31.193048    8250 fix.go:56] duration metric: took 13.546791ms for fixHost
	I0914 23:35:31.193052    8250 start.go:83] releasing machines lock for "ha-603000", held for 13.561209ms
	W0914 23:35:31.193057    8250 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:35:31.193091    8250 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:35:31.193096    8250 start.go:729] Will try again in 5 seconds ...
	I0914 23:35:36.195156    8250 start.go:360] acquireMachinesLock for ha-603000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:35:36.195594    8250 start.go:364] duration metric: took 335.667µs to acquireMachinesLock for "ha-603000"
	I0914 23:35:36.195712    8250 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:35:36.195732    8250 fix.go:54] fixHost starting: 
	I0914 23:35:36.196448    8250 fix.go:112] recreateIfNeeded on ha-603000: state=Stopped err=<nil>
	W0914 23:35:36.196471    8250 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:35:36.204895    8250 out.go:177] * Restarting existing qemu2 VM for "ha-603000" ...
	I0914 23:35:36.207858    8250 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:35:36.208075    8250 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d8:28:16:47:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/ha-603000/disk.qcow2
	I0914 23:35:36.217002    8250 main.go:141] libmachine: STDOUT: 
	I0914 23:35:36.217084    8250 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:35:36.217169    8250 fix.go:56] duration metric: took 21.432625ms for fixHost
	I0914 23:35:36.217186    8250 start.go:83] releasing machines lock for "ha-603000", held for 21.569792ms
	W0914 23:35:36.217432    8250 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:35:36.225937    8250 out.go:201] 
	W0914 23:35:36.229943    8250 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:35:36.229970    8250 out.go:270] * 
	* 
	W0914 23:35:36.232497    8250 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:35:36.241916    8250 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-603000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (68.649334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
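
The "executing:" lines in the stderr above show why qemu never starts: libmachine wraps qemu-system-aarch64 in socket_vmnet_client, which must first connect to /var/run/socket_vmnet and then exec qemu, handing over the connected descriptor (fd 3 here, matching -netdev socket,id=net0,fd=3). A stripped-down sketch of that invocation, with the profile-specific ISO, QMP monitor, pidfile, and disk arguments omitted:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -m 2200 -smp 2 \
	  -device virtio-net-pci,netdev=net0,mac=b2:d8:28:16:47:1d \
	  -netdev socket,id=net0,fd=3
	# When the connect fails, socket_vmnet_client reports "Connection refused"
	# and exits before qemu is exec'd, which is why the driver retries once
	# after 5 seconds and then aborts with GUEST_PROVISION.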

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-603000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.923083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-603000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-603000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.603292ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-603000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-603000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:35:36.434876    8268 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:35:36.435041    8268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:36.435044    8268 out.go:358] Setting ErrFile to fd 2...
	I0914 23:35:36.435046    8268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:35:36.435184    8268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:35:36.435434    8268 mustload.go:65] Loading cluster: ha-603000
	I0914 23:35:36.435634    8268 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:35:36.439470    8268 out.go:177] * The control-plane node ha-603000 host is not running: state=Stopped
	I0914 23:35:36.443501    8268 out.go:177]   To start a cluster, run: "minikube start -p ha-603000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-603000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (31.082292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-603000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-603000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (30.88225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)
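Both ha_test.go assertions above read the same `profile list` JSON: the node count comes from `.Config.Nodes` and the "HAppy" check from the top-level `Status` field. A quick way to spot-check the same fields by hand, assuming jq is available on the build host:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | select(.Name == "ha-603000") | {status: .Status, nodes: (.Config.Nodes | length)}'
	# The test expects 4 nodes and "HAppy"; this run reports 1 node and "Stopped".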

TestImageBuild/serial/Setup (10s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-370000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-370000 --driver=qemu2 : exit status 80 (9.946777209s)

-- stdout --
	* [image-370000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-370000" primary control-plane node in "image-370000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-370000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-370000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-370000 --driver=qemu2 " : exit status 80
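Every start failure in this report traces to the same stderr line above: QEMU is launched through socket_vmnet_client, which must first reach the socket_vmnet daemon at /var/run/socket_vmnet, and that connection is refused. A minimal sketch for checking the daemon on the host (paths taken from the log; the gateway address is illustrative, not from this run):

	ls -l /var/run/socket_vmnet                          # does the unix socket exist?
	pgrep -fl socket_vmnet || echo "daemon not running"
	# If missing, start the daemon the log's paths point at (vmnet needs root):
	# sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet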
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-370000 -n image-370000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-370000 -n image-370000: exit status 7 (52.76575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-370000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.00s)

TestJSONOutput/start/Command (9.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-733000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-733000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.858741458s)

-- stdout --
	{"specversion":"1.0","id":"1bdb8d7c-0a39-4fba-8a6d-0e6c56dd6ef7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-733000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9da944fa-5a8e-40c5-ba86-209be54f7b29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"d140c866-90d3-45b1-b4fb-56a2741915ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig"}}
	{"specversion":"1.0","id":"dee16a22-54ef-43df-b651-c9fc82c3e3f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4f1464db-0c1c-46f3-ad1d-bbc487fe05e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0642cc3a-995b-443a-9281-9f702a61cf94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube"}}
	{"specversion":"1.0","id":"b88251a0-2594-4e5d-81e9-f617c7fffc43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cbac7856-9e63-42d8-a782-70884d3df76e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b999c2c-16b2-4fa8-9f72-fb1cb4e53eff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"6bf82550-e3d0-4f1c-b46d-9ea39947d32b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-733000\" primary control-plane node in \"json-output-733000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b714734-d6cc-494f-9974-e35662f5f0e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"b813669b-1a3b-45c6-8a1a-d9e05a39f265","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-733000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"bec88409-964e-4f77-9c53-ed657c283912","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a4c3867d-8226-4fa0-88c3-c35de72fe6ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c881fcdc-10ae-4eb7-b602-ab6838634693","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-733000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"fbae0ba2-b73f-4581-b1c4-078b9dbcd6c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"d052c100-17e8-4343-98f5-7a73596fbe7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-733000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.86s)
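The "invalid character 'O'" error is Go's encoding/json: json_output_test.go decodes every stdout line as a CloudEvents object, and the bare "OUTPUT: " line injected ahead of the socket_vmnet error is not JSON. The failure mode is easy to sketch with any JSON parser, e.g. jq:

	printf '%s\n' '{"specversion":"1.0"}' | jq -e type   # parses, prints "object"
	printf '%s\n' 'OUTPUT: '              | jq -e type   # parse error, non-zero exit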

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-733000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-733000 --output=json --user=testUser: exit status 83 (80.112625ms)

-- stdout --
	{"specversion":"1.0","id":"cf8dcbde-3549-4442-9c50-37735c146dd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-733000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"8d061b2a-870a-40bd-9bad-766309e54cd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-733000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-733000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-733000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-733000 --output=json --user=testUser: exit status 83 (46.193666ms)

-- stdout --
	* The control-plane node json-output-733000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-733000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-733000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-733000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.77s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-888000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-888000 --driver=qemu2 : exit status 80 (10.465029167s)

-- stdout --
	* [first-888000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-888000" primary control-plane node in "first-888000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-888000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-888000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-888000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-14 23:36:10.909337 -0700 PDT m=+446.096921459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-889000 -n second-889000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-889000 -n second-889000: exit status 85 (81.320917ms)

-- stdout --
	* Profile "second-889000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-889000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-889000" host is not running, skipping log retrieval (state="* Profile \"second-889000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-889000\"")
helpers_test.go:175: Cleaning up "second-889000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-889000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-14 23:36:11.099623 -0700 PDT m=+446.287210542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-888000 -n first-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-888000 -n first-888000: exit status 7 (30.815583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-888000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-888000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-888000
--- FAIL: TestMinikubeProfile (10.77s)

TestMountStart/serial/StartWithMountFirst (10.11s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-852000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-852000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.043617667s)

-- stdout --
	* [mount-start-1-852000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-852000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-852000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-852000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-852000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-852000 -n mount-start-1-852000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-852000 -n mount-start-1-852000: exit status 7 (68.955417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-852000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.11s)

TestMultiNode/serial/FreshStart2Nodes (9.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-053000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-053000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.897447417s)

-- stdout --
	* [multinode-053000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-053000" primary control-plane node in "multinode-053000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-053000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:36:21.532226    8414 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:36:21.532373    8414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:36:21.532376    8414 out.go:358] Setting ErrFile to fd 2...
	I0914 23:36:21.532379    8414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:36:21.532517    8414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:36:21.533594    8414 out.go:352] Setting JSON to false
	I0914 23:36:21.550081    8414 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5750,"bootTime":1726376431,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:36:21.550147    8414 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:36:21.556164    8414 out.go:177] * [multinode-053000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:36:21.564180    8414 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:36:21.564222    8414 notify.go:220] Checking for updates...
	I0914 23:36:21.571039    8414 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:36:21.574040    8414 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:36:21.578068    8414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:36:21.581015    8414 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:36:21.584085    8414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:36:21.587255    8414 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:36:21.591037    8414 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:36:21.598100    8414 start.go:297] selected driver: qemu2
	I0914 23:36:21.598107    8414 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:36:21.598114    8414 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:36:21.600512    8414 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:36:21.603003    8414 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:36:21.607150    8414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:36:21.607165    8414 cni.go:84] Creating CNI manager for ""
	I0914 23:36:21.607181    8414 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0914 23:36:21.607184    8414 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 23:36:21.607207    8414 start.go:340] cluster config:
	{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:36:21.610798    8414 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:36:21.619065    8414 out.go:177] * Starting "multinode-053000" primary control-plane node in "multinode-053000" cluster
	I0914 23:36:21.623071    8414 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:36:21.623088    8414 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:36:21.623105    8414 cache.go:56] Caching tarball of preloaded images
	I0914 23:36:21.623188    8414 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:36:21.623197    8414 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:36:21.623379    8414 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/multinode-053000/config.json ...
	I0914 23:36:21.623391    8414 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/multinode-053000/config.json: {Name:mk8bfad4b9e20802ef45c8cc8aafa7d764431cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:36:21.623615    8414 start.go:360] acquireMachinesLock for multinode-053000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:36:21.623649    8414 start.go:364] duration metric: took 28.167µs to acquireMachinesLock for "multinode-053000"
	I0914 23:36:21.623660    8414 start.go:93] Provisioning new machine with config: &{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:36:21.623702    8414 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:36:21.632055    8414 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:36:21.649661    8414 start.go:159] libmachine.API.Create for "multinode-053000" (driver="qemu2")
	I0914 23:36:21.649690    8414 client.go:168] LocalClient.Create starting
	I0914 23:36:21.649750    8414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:36:21.649783    8414 main.go:141] libmachine: Decoding PEM data...
	I0914 23:36:21.649794    8414 main.go:141] libmachine: Parsing certificate...
	I0914 23:36:21.649832    8414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:36:21.649856    8414 main.go:141] libmachine: Decoding PEM data...
	I0914 23:36:21.649866    8414 main.go:141] libmachine: Parsing certificate...
	I0914 23:36:21.650327    8414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:36:21.813472    8414 main.go:141] libmachine: Creating SSH key...
	I0914 23:36:21.911152    8414 main.go:141] libmachine: Creating Disk image...
	I0914 23:36:21.911158    8414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:36:21.911399    8414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2
	I0914 23:36:21.920727    8414 main.go:141] libmachine: STDOUT: 
	I0914 23:36:21.920746    8414 main.go:141] libmachine: STDERR: 
	I0914 23:36:21.920806    8414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2 +20000M
	I0914 23:36:21.928636    8414 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:36:21.928651    8414 main.go:141] libmachine: STDERR: 
	I0914 23:36:21.928664    8414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2
	I0914 23:36:21.928669    8414 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:36:21.928681    8414 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:36:21.928711    8414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:de:18:88:3e:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2
	I0914 23:36:21.930320    8414 main.go:141] libmachine: STDOUT: 
	I0914 23:36:21.930345    8414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:36:21.930372    8414 client.go:171] duration metric: took 280.68125ms to LocalClient.Create
	I0914 23:36:23.932518    8414 start.go:128] duration metric: took 2.308832333s to createHost
	I0914 23:36:23.932577    8414 start.go:83] releasing machines lock for "multinode-053000", held for 2.308962125s
	W0914 23:36:23.932666    8414 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:36:23.943702    8414 out.go:177] * Deleting "multinode-053000" in qemu2 ...
	W0914 23:36:23.986455    8414 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:36:23.986484    8414 start.go:729] Will try again in 5 seconds ...
	I0914 23:36:28.988608    8414 start.go:360] acquireMachinesLock for multinode-053000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:36:28.989099    8414 start.go:364] duration metric: took 402.459µs to acquireMachinesLock for "multinode-053000"
	I0914 23:36:28.989232    8414 start.go:93] Provisioning new machine with config: &{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:36:28.989505    8414 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:36:29.008034    8414 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:36:29.061265    8414 start.go:159] libmachine.API.Create for "multinode-053000" (driver="qemu2")
	I0914 23:36:29.061313    8414 client.go:168] LocalClient.Create starting
	I0914 23:36:29.061417    8414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:36:29.061487    8414 main.go:141] libmachine: Decoding PEM data...
	I0914 23:36:29.061504    8414 main.go:141] libmachine: Parsing certificate...
	I0914 23:36:29.061563    8414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:36:29.061607    8414 main.go:141] libmachine: Decoding PEM data...
	I0914 23:36:29.061647    8414 main.go:141] libmachine: Parsing certificate...
	I0914 23:36:29.062177    8414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:36:29.235393    8414 main.go:141] libmachine: Creating SSH key...
	I0914 23:36:29.327970    8414 main.go:141] libmachine: Creating Disk image...
	I0914 23:36:29.327976    8414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:36:29.328202    8414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2
	I0914 23:36:29.337401    8414 main.go:141] libmachine: STDOUT: 
	I0914 23:36:29.337416    8414 main.go:141] libmachine: STDERR: 
	I0914 23:36:29.337473    8414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2 +20000M
	I0914 23:36:29.345361    8414 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:36:29.345375    8414 main.go:141] libmachine: STDERR: 
	I0914 23:36:29.345387    8414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2
	I0914 23:36:29.345392    8414 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:36:29.345405    8414 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:36:29.345434    8414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:58:43:52:7c:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2
	I0914 23:36:29.347126    8414 main.go:141] libmachine: STDOUT: 
	I0914 23:36:29.347141    8414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:36:29.347153    8414 client.go:171] duration metric: took 285.838458ms to LocalClient.Create
	I0914 23:36:31.349289    8414 start.go:128] duration metric: took 2.359791083s to createHost
	I0914 23:36:31.349349    8414 start.go:83] releasing machines lock for "multinode-053000", held for 2.360269458s
	W0914 23:36:31.349748    8414 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-053000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-053000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:36:31.367541    8414 out.go:201] 
	W0914 23:36:31.371485    8414 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:36:31.371523    8414 out.go:270] * 
	* 
	W0914 23:36:31.374252    8414 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:36:31.387441    8414 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-053000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
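The `executing:` lines in the stderr above show the failing hop directly: socket_vmnet_client has to connect to /var/run/socket_vmnet before it execs qemu-system-aarch64, and it never gets that far. The connection can be probed without booting a VM (a sketch; `true` stands in for the wrapped QEMU command line):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  || echo "connect to daemon failed before the wrapped command ran"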
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (70.931959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.97s)

TestMultiNode/serial/DeployApp2Nodes (77.21s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (61.387459ms)

** stderr ** 
	error: cluster "multinode-053000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
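Since FreshStart2Nodes never created the cluster, the `minikube kubectl -p` wrapper finds no server for the profile, and every kubectl call below fails the same way. The cluster's absence can be confirmed directly against the job's kubeconfig (path as printed earlier in this report):

	KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig \
	  kubectl config get-contexts multinode-053000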
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- rollout status deployment/busybox: exit status 1 (58.059834ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.531708ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.38625ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.850542ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.457625ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.810041ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.80775ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.812167ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.293667ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.530125ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.943375ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.349041ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.448333ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.097834ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.8915ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.150917ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (30.796959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (77.21s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.228583ms)

** stderr ** 
	error: no server found for cluster "multinode-053000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (30.696541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-053000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-053000 -v 3 --alsologtostderr: exit status 83 (41.544125ms)

-- stdout --
	* The control-plane node multinode-053000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-053000"

-- /stdout --
** stderr ** 
	I0914 23:37:48.802871    8494 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:37:48.803043    8494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:48.803047    8494 out.go:358] Setting ErrFile to fd 2...
	I0914 23:37:48.803049    8494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:48.803183    8494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:37:48.803429    8494 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:37:48.803630    8494 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:37:48.806725    8494 out.go:177] * The control-plane node multinode-053000 host is not running: state=Stopped
	I0914 23:37:48.810471    8494 out.go:177]   To start a cluster, run: "minikube start -p multinode-053000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-053000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (30.535958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-053000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-053000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.815208ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-053000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-053000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-053000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
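The "unexpected end of JSON input" follows directly from the failure above it: kubectl exited non-zero with an empty stdout, and Go's encoding/json returns exactly that error when asked to decode zero bytes. A minimal sketch of the failure mode (simplified types, not the harness's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// kubectl printed nothing to stdout, so the harness hands an
		// empty byte slice to the decoder.
		var labels map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}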
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (30.808167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-053000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-053000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-053000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-053000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (30.228542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status --output json --alsologtostderr: exit status 7 (30.460542ms)

-- stdout --
	{"Name":"multinode-053000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0914 23:37:49.008567    8506 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:37:49.008725    8506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:49.008728    8506 out.go:358] Setting ErrFile to fd 2...
	I0914 23:37:49.008730    8506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:49.008870    8506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:37:49.008992    8506 out.go:352] Setting JSON to true
	I0914 23:37:49.009000    8506 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:37:49.009069    8506 notify.go:220] Checking for updates...
	I0914 23:37:49.009209    8506 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:37:49.009215    8506 status.go:255] checking status of multinode-053000 ...
	I0914 23:37:49.009463    8506 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:37:49.009467    8506 status.go:343] host is not running, skipping remaining checks
	I0914 23:37:49.009469    8506 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-053000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
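The decode error here is a shape mismatch rather than malformed output: with a single stopped node, the status command emitted one JSON object (see the stdout above), while the harness unmarshals into a slice ([]cmd.Status). A minimal reproduction (Status is a hypothetical stand-in for the harness's cmd.Status):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is a hypothetical stand-in for the harness's cmd.Status.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		out := []byte(`{"Name":"multinode-053000","Host":"Stopped"}`)

		// Decoding a JSON object into a slice fails with the error in the log.
		var many []Status
		fmt.Println(json.Unmarshal(out, &many)) // json: cannot unmarshal object into Go value of type []main.Status

		// Decoding into a single struct succeeds.
		var one Status
		if err := json.Unmarshal(out, &one); err == nil {
			fmt.Println(one.Host) // Stopped
		}
	}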
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (30.454625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 node stop m03: exit status 85 (45.321958ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-053000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status: exit status 7 (31.312458ms)

-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status --alsologtostderr: exit status 7 (30.522917ms)

-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 23:37:49.147041    8514 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:37:49.147196    8514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:49.147199    8514 out.go:358] Setting ErrFile to fd 2...
	I0914 23:37:49.147202    8514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:49.147344    8514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:37:49.147466    8514 out.go:352] Setting JSON to false
	I0914 23:37:49.147476    8514 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:37:49.147535    8514 notify.go:220] Checking for updates...
	I0914 23:37:49.147681    8514 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:37:49.147687    8514 status.go:255] checking status of multinode-053000 ...
	I0914 23:37:49.147938    8514 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:37:49.147941    8514 status.go:343] host is not running, skipping remaining checks
	I0914 23:37:49.147943    8514 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (30.332417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (53.26s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.279083ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0914 23:37:49.208215    8518 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:37:49.208609    8518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:49.208613    8518 out.go:358] Setting ErrFile to fd 2...
	I0914 23:37:49.208615    8518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:49.208784    8518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:37:49.208988    8518 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:37:49.209195    8518 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:37:49.213938    8518 out.go:201] 
	W0914 23:37:49.217728    8518 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0914 23:37:49.217734    8518 out.go:270] * 
	* 
	W0914 23:37:49.219651    8518 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:37:49.223845    8518 out.go:201] 

** /stderr **
multinode_test.go:284: I0914 23:37:49.208215    8518 out.go:345] Setting OutFile to fd 1 ...
I0914 23:37:49.208609    8518 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:37:49.208613    8518 out.go:358] Setting ErrFile to fd 2...
I0914 23:37:49.208615    8518 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 23:37:49.208784    8518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
I0914 23:37:49.208988    8518 mustload.go:65] Loading cluster: multinode-053000
I0914 23:37:49.209195    8518 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 23:37:49.213938    8518 out.go:201] 
W0914 23:37:49.217728    8518 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0914 23:37:49.217734    8518 out.go:270] * 
* 
W0914 23:37:49.219651    8518 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0914 23:37:49.223845    8518 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-053000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr: exit status 7 (31.126ms)

-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 23:37:49.258318    8520 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:37:49.258482    8520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:49.258485    8520 out.go:358] Setting ErrFile to fd 2...
	I0914 23:37:49.258488    8520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:49.258598    8520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:37:49.258728    8520 out.go:352] Setting JSON to false
	I0914 23:37:49.258736    8520 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:37:49.258794    8520 notify.go:220] Checking for updates...
	I0914 23:37:49.258962    8520 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:37:49.258968    8520 status.go:255] checking status of multinode-053000 ...
	I0914 23:37:49.259211    8520 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:37:49.259214    8520 status.go:343] host is not running, skipping remaining checks
	I0914 23:37:49.259216    8520 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr: exit status 7 (71.461042ms)

-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 23:37:50.098146    8522 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:37:50.098363    8522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:50.098367    8522 out.go:358] Setting ErrFile to fd 2...
	I0914 23:37:50.098371    8522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:50.098539    8522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:37:50.098704    8522 out.go:352] Setting JSON to false
	I0914 23:37:50.098714    8522 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:37:50.098751    8522 notify.go:220] Checking for updates...
	I0914 23:37:50.098979    8522 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:37:50.098987    8522 status.go:255] checking status of multinode-053000 ...
	I0914 23:37:50.099317    8522 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:37:50.099322    8522 status.go:343] host is not running, skipping remaining checks
	I0914 23:37:50.099325    8522 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr: exit status 7 (71.688208ms)

-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 23:37:51.487658    8524 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:37:51.487867    8524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:51.487871    8524 out.go:358] Setting ErrFile to fd 2...
	I0914 23:37:51.487874    8524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:51.488066    8524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:37:51.488238    8524 out.go:352] Setting JSON to false
	I0914 23:37:51.488250    8524 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:37:51.488295    8524 notify.go:220] Checking for updates...
	I0914 23:37:51.488511    8524 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:37:51.488520    8524 status.go:255] checking status of multinode-053000 ...
	I0914 23:37:51.488825    8524 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:37:51.488830    8524 status.go:343] host is not running, skipping remaining checks
	I0914 23:37:51.488833    8524 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr: exit status 7 (76.870875ms)

-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 23:37:53.479564    8526 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:37:53.479787    8526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:53.479794    8526 out.go:358] Setting ErrFile to fd 2...
	I0914 23:37:53.479798    8526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:53.479989    8526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:37:53.480170    8526 out.go:352] Setting JSON to false
	I0914 23:37:53.480182    8526 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:37:53.480226    8526 notify.go:220] Checking for updates...
	I0914 23:37:53.480473    8526 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:37:53.480482    8526 status.go:255] checking status of multinode-053000 ...
	I0914 23:37:53.480840    8526 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:37:53.480845    8526 status.go:343] host is not running, skipping remaining checks
	I0914 23:37:53.480848    8526 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr: exit status 7 (74.937ms)

-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 23:37:58.195476    8528 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:37:58.195653    8528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:58.195657    8528 out.go:358] Setting ErrFile to fd 2...
	I0914 23:37:58.195661    8528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:37:58.195848    8528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:37:58.196009    8528 out.go:352] Setting JSON to false
	I0914 23:37:58.196020    8528 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:37:58.196048    8528 notify.go:220] Checking for updates...
	I0914 23:37:58.196287    8528 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:37:58.196296    8528 status.go:255] checking status of multinode-053000 ...
	I0914 23:37:58.196615    8528 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:37:58.196620    8528 status.go:343] host is not running, skipping remaining checks
	I0914 23:37:58.196623    8528 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr: exit status 7 (75.3745ms)

-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 23:38:05.134473    8531 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:38:05.134669    8531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:05.134673    8531 out.go:358] Setting ErrFile to fd 2...
	I0914 23:38:05.134677    8531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:05.134844    8531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:38:05.135025    8531 out.go:352] Setting JSON to false
	I0914 23:38:05.135036    8531 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:38:05.135077    8531 notify.go:220] Checking for updates...
	I0914 23:38:05.135326    8531 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:38:05.135334    8531 status.go:255] checking status of multinode-053000 ...
	I0914 23:38:05.135662    8531 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:38:05.135667    8531 status.go:343] host is not running, skipping remaining checks
	I0914 23:38:05.135670    8531 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr: exit status 7 (74.068ms)

-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 23:38:13.767667    8533 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:38:13.767858    8533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:13.767863    8533 out.go:358] Setting ErrFile to fd 2...
	I0914 23:38:13.767867    8533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:13.768032    8533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:38:13.768199    8533 out.go:352] Setting JSON to false
	I0914 23:38:13.768210    8533 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:38:13.768256    8533 notify.go:220] Checking for updates...
	I0914 23:38:13.768495    8533 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:38:13.768505    8533 status.go:255] checking status of multinode-053000 ...
	I0914 23:38:13.768826    8533 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:38:13.768832    8533 status.go:343] host is not running, skipping remaining checks
	I0914 23:38:13.768835    8533 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr: exit status 7 (72.96275ms)

-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 23:38:19.997741    8535 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:38:19.997958    8535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:19.997962    8535 out.go:358] Setting ErrFile to fd 2...
	I0914 23:38:19.997966    8535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:19.998130    8535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:38:19.998298    8535 out.go:352] Setting JSON to false
	I0914 23:38:19.998309    8535 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:38:19.998356    8535 notify.go:220] Checking for updates...
	I0914 23:38:19.998598    8535 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:38:19.998610    8535 status.go:255] checking status of multinode-053000 ...
	I0914 23:38:19.998923    8535 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:38:19.998928    8535 status.go:343] host is not running, skipping remaining checks
	I0914 23:38:19.998931    8535 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr: exit status 7 (77.203292ms)

-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 23:38:42.401888    8537 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:38:42.402082    8537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:42.402087    8537 out.go:358] Setting ErrFile to fd 2...
	I0914 23:38:42.402091    8537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:42.402264    8537 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:38:42.402446    8537 out.go:352] Setting JSON to false
	I0914 23:38:42.402459    8537 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:38:42.402501    8537 notify.go:220] Checking for updates...
	I0914 23:38:42.402752    8537 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:38:42.402765    8537 status.go:255] checking status of multinode-053000 ...
	I0914 23:38:42.403113    8537 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:38:42.403117    8537 status.go:343] host is not running, skipping remaining checks
	I0914 23:38:42.403120    8537 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-053000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (34.076583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (53.26s)
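
Every restart attempt in this block fails the same way: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet, so qemu-system-aarch64 never starts. A minimal Go sketch of a host-side probe for that precondition (not part of the test suite; the socket path is copied from the log above):

	// probe.go - hedged sketch: dial the unix socket that minikube's
	// qemu2 driver hands to socket_vmnet_client, to see whether the
	// daemon is up before any VM start is attempted.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// with the daemon down this reports the same "connection
			// refused" condition the driver logs above
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}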

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-053000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-053000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-053000: (2.080916959s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.224856458s)

                                                
                                                
-- stdout --
	* [multinode-053000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-053000" primary control-plane node in "multinode-053000" cluster
	* Restarting existing qemu2 VM for "multinode-053000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-053000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:38:44.614882    8555 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:38:44.615045    8555 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:44.615049    8555 out.go:358] Setting ErrFile to fd 2...
	I0914 23:38:44.615052    8555 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:44.615232    8555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:38:44.616426    8555 out.go:352] Setting JSON to false
	I0914 23:38:44.635908    8555 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5893,"bootTime":1726376431,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:38:44.635984    8555 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:38:44.641484    8555 out.go:177] * [multinode-053000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:38:44.649390    8555 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:38:44.649434    8555 notify.go:220] Checking for updates...
	I0914 23:38:44.656336    8555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:38:44.659388    8555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:38:44.662341    8555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:38:44.665394    8555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:38:44.668389    8555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:38:44.670076    8555 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:38:44.670144    8555 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:38:44.673310    8555 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:38:44.680295    8555 start.go:297] selected driver: qemu2
	I0914 23:38:44.680306    8555 start.go:901] validating driver "qemu2" against &{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:38:44.680386    8555 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:38:44.682967    8555 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:38:44.683002    8555 cni.go:84] Creating CNI manager for ""
	I0914 23:38:44.683032    8555 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0914 23:38:44.683091    8555 start.go:340] cluster config:
	{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:38:44.686905    8555 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:38:44.695385    8555 out.go:177] * Starting "multinode-053000" primary control-plane node in "multinode-053000" cluster
	I0914 23:38:44.699314    8555 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:38:44.699330    8555 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:38:44.699341    8555 cache.go:56] Caching tarball of preloaded images
	I0914 23:38:44.699408    8555 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:38:44.699415    8555 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:38:44.699480    8555 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/multinode-053000/config.json ...
	I0914 23:38:44.699923    8555 start.go:360] acquireMachinesLock for multinode-053000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:38:44.699957    8555 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "multinode-053000"
	I0914 23:38:44.699966    8555 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:38:44.699972    8555 fix.go:54] fixHost starting: 
	I0914 23:38:44.700091    8555 fix.go:112] recreateIfNeeded on multinode-053000: state=Stopped err=<nil>
	W0914 23:38:44.700103    8555 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:38:44.708344    8555 out.go:177] * Restarting existing qemu2 VM for "multinode-053000" ...
	I0914 23:38:44.712357    8555 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:38:44.712396    8555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:58:43:52:7c:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2
	I0914 23:38:44.714518    8555 main.go:141] libmachine: STDOUT: 
	I0914 23:38:44.714538    8555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:38:44.714569    8555 fix.go:56] duration metric: took 14.597417ms for fixHost
	I0914 23:38:44.714573    8555 start.go:83] releasing machines lock for "multinode-053000", held for 14.611958ms
	W0914 23:38:44.714579    8555 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:38:44.714624    8555 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:38:44.714629    8555 start.go:729] Will try again in 5 seconds ...
	I0914 23:38:49.716778    8555 start.go:360] acquireMachinesLock for multinode-053000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:38:49.717262    8555 start.go:364] duration metric: took 377.625µs to acquireMachinesLock for "multinode-053000"
	I0914 23:38:49.717409    8555 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:38:49.717428    8555 fix.go:54] fixHost starting: 
	I0914 23:38:49.718165    8555 fix.go:112] recreateIfNeeded on multinode-053000: state=Stopped err=<nil>
	W0914 23:38:49.718190    8555 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:38:49.725707    8555 out.go:177] * Restarting existing qemu2 VM for "multinode-053000" ...
	I0914 23:38:49.729675    8555 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:38:49.729976    8555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:58:43:52:7c:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2
	I0914 23:38:49.739817    8555 main.go:141] libmachine: STDOUT: 
	I0914 23:38:49.739872    8555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:38:49.739961    8555 fix.go:56] duration metric: took 22.532334ms for fixHost
	I0914 23:38:49.739979    8555 start.go:83] releasing machines lock for "multinode-053000", held for 22.692ms
	W0914 23:38:49.740113    8555 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-053000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-053000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:38:49.747686    8555 out.go:201] 
	W0914 23:38:49.751628    8555 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:38:49.751645    8555 out.go:270] * 
	* 
	W0914 23:38:49.753679    8555 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:38:49.762652    8555 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-053000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-053000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (33.436625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.44s)
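
The log above shows the driver's recovery behavior: fixHost fails, start.go logs "Will try again in 5 seconds" (start.go:729), the single retry fails too, and the error is promoted to GUEST_PROVISION. A simplified Go sketch of that two-attempt flow, for illustration only; minikube's real logic lives in start.go:

	// retry sketch: two attempts with a fixed 5s pause, mirroring the
	// "Will try again in 5 seconds" behavior in the log above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost is a stand-in for the driver start that fails above.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the pause logged at start.go:729
			err = startHost()
		}
		if err != nil {
			// second failure is terminal, matching the log's exit path
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}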

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 node delete m03: exit status 83 (42.831084ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-053000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-053000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-053000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status --alsologtostderr: exit status 7 (29.783417ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:38:49.951935    8569 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:38:49.952103    8569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:49.952106    8569 out.go:358] Setting ErrFile to fd 2...
	I0914 23:38:49.952109    8569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:49.952248    8569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:38:49.952370    8569 out.go:352] Setting JSON to false
	I0914 23:38:49.952384    8569 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:38:49.952448    8569 notify.go:220] Checking for updates...
	I0914 23:38:49.952588    8569 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:38:49.952593    8569 status.go:255] checking status of multinode-053000 ...
	I0914 23:38:49.952822    8569 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:38:49.952826    8569 status.go:343] host is not running, skipping remaining checks
	I0914 23:38:49.952829    8569 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-053000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (30.33325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
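
Both status commands above exit with status 7 once the host is stopped. Assuming minikube status composes its exit code as a bitmask with one bit per stopped component (the flag names below are assumptions, not taken from this build), a fully stopped cluster yields 1|2|4 = 7:

	// status exit-code sketch. Flag values and names are assumptions
	// for illustration: one bit per stopped component, OR-ed together.
	package main

	import "fmt"

	const (
		minikubeNotRunning = 1 << 0 // host stopped
		clusterNotRunning  = 1 << 1 // kubelet stopped
		k8sNotRunning      = 1 << 2 // apiserver stopped
	)

	func main() {
		host, kubelet, apiserver := "Stopped", "Stopped", "Stopped" // per the output above
		code := 0
		if host == "Stopped" {
			code |= minikubeNotRunning
		}
		if kubelet == "Stopped" {
			code |= clusterNotRunning
		}
		if apiserver == "Stopped" {
			code |= k8sNotRunning
		}
		fmt.Println("exit status", code) // prints "exit status 7"
	}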

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-053000 stop: (3.777452416s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status: exit status 7 (65.425792ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-053000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-053000 status --alsologtostderr: exit status 7 (33.287083ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:38:53.858999    8595 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:38:53.859163    8595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:53.859166    8595 out.go:358] Setting ErrFile to fd 2...
	I0914 23:38:53.859169    8595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:53.859303    8595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:38:53.859418    8595 out.go:352] Setting JSON to false
	I0914 23:38:53.859427    8595 mustload.go:65] Loading cluster: multinode-053000
	I0914 23:38:53.859475    8595 notify.go:220] Checking for updates...
	I0914 23:38:53.859626    8595 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:38:53.859633    8595 status.go:255] checking status of multinode-053000 ...
	I0914 23:38:53.859890    8595 status.go:330] multinode-053000 host status = "Stopped" (err=<nil>)
	I0914 23:38:53.859894    8595 status.go:343] host is not running, skipping remaining checks
	I0914 23:38:53.859896    8595 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (30.64975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.91s)
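
The assertions at multinode_test.go:364 and :368 count stopped hosts and kubelets in the status output and expect two of each; with the second node never added, each count is 1. A hedged sketch of that check (the counting logic is presumed, not quoted from the test):

	// sketch of the failing assertion: the test expects status output
	// for two stopped nodes, but only the control plane exists, so
	// each count comes back as 1. Assumed logic, for illustration.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		out := "multinode-053000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
		if n := strings.Count(out, "host: Stopped"); n != 2 {
			fmt.Printf("incorrect number of stopped hosts: got %d, want 2\n", n)
		}
		if n := strings.Count(out, "kubelet: Stopped"); n != 2 {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want 2\n", n)
		}
	}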

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.183141417s)

                                                
                                                
-- stdout --
	* [multinode-053000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-053000" primary control-plane node in "multinode-053000" cluster
	* Restarting existing qemu2 VM for "multinode-053000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-053000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:38:53.920221    8599 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:38:53.920357    8599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:53.920361    8599 out.go:358] Setting ErrFile to fd 2...
	I0914 23:38:53.920363    8599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:38:53.920495    8599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:38:53.921527    8599 out.go:352] Setting JSON to false
	I0914 23:38:53.937358    8599 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5902,"bootTime":1726376431,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:38:53.937430    8599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:38:53.941308    8599 out.go:177] * [multinode-053000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:38:53.948194    8599 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:38:53.948255    8599 notify.go:220] Checking for updates...
	I0914 23:38:53.955264    8599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:38:53.958210    8599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:38:53.962215    8599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:38:53.965256    8599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:38:53.968161    8599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:38:53.971499    8599 config.go:182] Loaded profile config "multinode-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:38:53.971769    8599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:38:53.976235    8599 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:38:53.983211    8599 start.go:297] selected driver: qemu2
	I0914 23:38:53.983217    8599 start.go:901] validating driver "qemu2" against &{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:38:53.983271    8599 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:38:53.985637    8599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:38:53.985664    8599 cni.go:84] Creating CNI manager for ""
	I0914 23:38:53.985695    8599 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0914 23:38:53.985737    8599 start.go:340] cluster config:
	{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:38:53.989405    8599 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:38:53.998207    8599 out.go:177] * Starting "multinode-053000" primary control-plane node in "multinode-053000" cluster
	I0914 23:38:54.002210    8599 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:38:54.002230    8599 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:38:54.002243    8599 cache.go:56] Caching tarball of preloaded images
	I0914 23:38:54.002302    8599 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:38:54.002307    8599 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:38:54.002366    8599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/multinode-053000/config.json ...
	I0914 23:38:54.002797    8599 start.go:360] acquireMachinesLock for multinode-053000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:38:54.002824    8599 start.go:364] duration metric: took 21.166µs to acquireMachinesLock for "multinode-053000"
	I0914 23:38:54.002832    8599 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:38:54.002837    8599 fix.go:54] fixHost starting: 
	I0914 23:38:54.002953    8599 fix.go:112] recreateIfNeeded on multinode-053000: state=Stopped err=<nil>
	W0914 23:38:54.002961    8599 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:38:54.007220    8599 out.go:177] * Restarting existing qemu2 VM for "multinode-053000" ...
	I0914 23:38:54.015059    8599 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:38:54.015096    8599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:58:43:52:7c:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2
	I0914 23:38:54.017104    8599 main.go:141] libmachine: STDOUT: 
	I0914 23:38:54.017120    8599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:38:54.017151    8599 fix.go:56] duration metric: took 14.3135ms for fixHost
	I0914 23:38:54.017156    8599 start.go:83] releasing machines lock for "multinode-053000", held for 14.328292ms
	W0914 23:38:54.017162    8599 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:38:54.017203    8599 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:38:54.017208    8599 start.go:729] Will try again in 5 seconds ...
	I0914 23:38:59.018620    8599 start.go:360] acquireMachinesLock for multinode-053000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:38:59.019063    8599 start.go:364] duration metric: took 356.041µs to acquireMachinesLock for "multinode-053000"
	I0914 23:38:59.019186    8599 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:38:59.019206    8599 fix.go:54] fixHost starting: 
	I0914 23:38:59.019928    8599 fix.go:112] recreateIfNeeded on multinode-053000: state=Stopped err=<nil>
	W0914 23:38:59.019955    8599 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:38:59.027328    8599 out.go:177] * Restarting existing qemu2 VM for "multinode-053000" ...
	I0914 23:38:59.031273    8599 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:38:59.031521    8599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:58:43:52:7c:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/multinode-053000/disk.qcow2
	I0914 23:38:59.040305    8599 main.go:141] libmachine: STDOUT: 
	I0914 23:38:59.040371    8599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:38:59.040448    8599 fix.go:56] duration metric: took 21.238667ms for fixHost
	I0914 23:38:59.040476    8599 start.go:83] releasing machines lock for "multinode-053000", held for 21.386459ms
	W0914 23:38:59.040678    8599 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-053000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-053000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:38:59.048280    8599 out.go:201] 
	W0914 23:38:59.051382    8599 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:38:59.051407    8599 out.go:270] * 
	* 
	W0914 23:38:59.053771    8599 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:38:59.061301    8599 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (72.707875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
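
The libmachine lines above show how the VM is launched: the qemu2 driver does not exec qemu-system-aarch64 directly but wraps it in socket_vmnet_client, which must connect to /var/run/socket_vmnet before handing a file descriptor to qemu. A trimmed Go sketch of that invocation (argv abbreviated; illustrative, not minikube's actual driver code):

	// launch sketch: socket_vmnet_client takes the socket path first,
	// then the full qemu command line to run once connected.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		args := []string{
			"/var/run/socket_vmnet", // unix socket the daemon must own
			"qemu-system-aarch64",   // the VM binary actually being run
			"-M", "virt,highmem=off",
			"-accel", "hvf", // hvf hardware acceleration per the log
			// ... remaining -drive/-netdev/-daemonize flags elided ...
		}
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			// with the daemon down this reproduces the STDERR in the log
			fmt.Printf("STDOUT/STDERR: %s\nerror: %v\n", out, err)
		}
	}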

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-053000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-053000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-053000-m01 --driver=qemu2 : exit status 80 (10.156345916s)

                                                
                                                
-- stdout --
	* [multinode-053000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-053000-m01" primary control-plane node in "multinode-053000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-053000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-053000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-053000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-053000-m02 --driver=qemu2 : exit status 80 (10.183195125s)

                                                
                                                
-- stdout --
	* [multinode-053000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-053000-m02" primary control-plane node in "multinode-053000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-053000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-053000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-053000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-053000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-053000: exit status 83 (80.185625ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-053000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-053000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-053000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (31.194208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.57s)
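
ValidateNameConflict starts profiles named multinode-053000-m01 and -m02 to exercise the rule that a new profile name must not collide with a node name of an existing cluster; here both starts fail on socket_vmnet before the rule is ever reached. A hedged Go sketch of such a collision check (the helper and the node list are assumptions for illustration, not minikube's API):

	// conflict sketch: a requested profile name collides when an
	// existing profile already owns a node with that suffixed name.
	package main

	import "fmt"

	func conflicts(requested string, nodesByProfile map[string][]string) bool {
		for profile, nodes := range nodesByProfile {
			for _, n := range nodes {
				if requested == profile+"-"+n {
					return true // requested name shadows a node of profile
				}
			}
		}
		return false
	}

	func main() {
		existing := map[string][]string{"multinode-053000": {"m01"}} // assumed node list
		fmt.Println(conflicts("multinode-053000-m01", existing))     // true -> start must refuse
		fmt.Println(conflicts("multinode-053000-m02", existing))     // false -> start may proceed
	}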

                                                
                                    
TestPreload (10.22s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.068086875s)

                                                
                                                
-- stdout --
	* [test-preload-322000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-322000" primary control-plane node in "test-preload-322000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:39:19.873188    8651 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:39:19.873300    8651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:39:19.873303    8651 out.go:358] Setting ErrFile to fd 2...
	I0914 23:39:19.873306    8651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:39:19.873449    8651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:39:19.874433    8651 out.go:352] Setting JSON to false
	I0914 23:39:19.890508    8651 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5928,"bootTime":1726376431,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:39:19.890579    8651 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:39:19.897299    8651 out.go:177] * [test-preload-322000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:39:19.904259    8651 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:39:19.904367    8651 notify.go:220] Checking for updates...
	I0914 23:39:19.912250    8651 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:39:19.915224    8651 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:39:19.918272    8651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:39:19.921324    8651 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:39:19.924226    8651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:39:19.927533    8651 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:39:19.927583    8651 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:39:19.931233    8651 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:39:19.938258    8651 start.go:297] selected driver: qemu2
	I0914 23:39:19.938266    8651 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:39:19.938273    8651 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:39:19.940705    8651 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:39:19.944254    8651 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:39:19.945730    8651 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:39:19.945754    8651 cni.go:84] Creating CNI manager for ""
	I0914 23:39:19.945807    8651 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:39:19.945813    8651 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:39:19.945843    8651 start.go:340] cluster config:
	{Name:test-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-322000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:39:19.949657    8651 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:39:19.958268    8651 out.go:177] * Starting "test-preload-322000" primary control-plane node in "test-preload-322000" cluster
	I0914 23:39:19.962280    8651 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0914 23:39:19.962369    8651 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/test-preload-322000/config.json ...
	I0914 23:39:19.962387    8651 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/test-preload-322000/config.json: {Name:mk6d42b2290fc6b59ac8ac3da19888ab5df25e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:39:19.962409    8651 cache.go:107] acquiring lock: {Name:mk514f94bfdd47feb2d2a83a732e5d28cc5e1120 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:39:19.962420    8651 cache.go:107] acquiring lock: {Name:mk710f3a1918dcec37c643d787ee0764a7c6d2e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:39:19.962429    8651 cache.go:107] acquiring lock: {Name:mk038e47a527268628d7c1c921439be12deb76aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:39:19.962446    8651 cache.go:107] acquiring lock: {Name:mkdeb706b51b41101d8410322d34764eae4e0cce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:39:19.962579    8651 cache.go:107] acquiring lock: {Name:mka487969fdd144bc753d07ad231998f15800aa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:39:19.962664    8651 cache.go:107] acquiring lock: {Name:mk75e46ab7d6ad0401cb8e61d815c149045b8144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:39:19.962674    8651 cache.go:107] acquiring lock: {Name:mke8c2f27e2afc8b2baa09e84272d9099f90c9f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:39:19.962409    8651 cache.go:107] acquiring lock: {Name:mk5e11b68a12bbe945f4c2a5d4ad7e6b8c898509 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:39:19.962761    8651 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0914 23:39:19.962796    8651 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:39:19.962804    8651 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 23:39:19.962812    8651 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 23:39:19.962855    8651 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:39:19.962913    8651 start.go:360] acquireMachinesLock for test-preload-322000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:39:19.962954    8651 start.go:364] duration metric: took 32.875µs to acquireMachinesLock for "test-preload-322000"
	I0914 23:39:19.962967    8651 start.go:93] Provisioning new machine with config: &{Name:test-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-322000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:39:19.963002    8651 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 23:39:19.963004    8651 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:39:19.963014    8651 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 23:39:19.963052    8651 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:39:19.967230    8651 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:39:19.976566    8651 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:39:19.976614    8651 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 23:39:19.977186    8651 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:39:19.977295    8651 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:39:19.978939    8651 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 23:39:19.978931    8651 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 23:39:19.979011    8651 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 23:39:19.979026    8651 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 23:39:19.985847    8651 start.go:159] libmachine.API.Create for "test-preload-322000" (driver="qemu2")
	I0914 23:39:19.985868    8651 client.go:168] LocalClient.Create starting
	I0914 23:39:19.985935    8651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:39:19.985968    8651 main.go:141] libmachine: Decoding PEM data...
	I0914 23:39:19.985980    8651 main.go:141] libmachine: Parsing certificate...
	I0914 23:39:19.986020    8651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:39:19.986045    8651 main.go:141] libmachine: Decoding PEM data...
	I0914 23:39:19.986054    8651 main.go:141] libmachine: Parsing certificate...
	I0914 23:39:19.986442    8651 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:39:20.152093    8651 main.go:141] libmachine: Creating SSH key...
	I0914 23:39:20.380300    8651 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0914 23:39:20.405319    8651 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0914 23:39:20.405337    8651 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 23:39:20.419974    8651 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0914 23:39:20.425638    8651 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0914 23:39:20.426516    8651 main.go:141] libmachine: Creating Disk image...
	I0914 23:39:20.426530    8651 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:39:20.426796    8651 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2
	I0914 23:39:20.436416    8651 main.go:141] libmachine: STDOUT: 
	I0914 23:39:20.436434    8651 main.go:141] libmachine: STDERR: 
	I0914 23:39:20.436487    8651 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2 +20000M
	I0914 23:39:20.444757    8651 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:39:20.444775    8651 main.go:141] libmachine: STDERR: 
	I0914 23:39:20.444787    8651 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2
	I0914 23:39:20.444791    8651 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:39:20.444803    8651 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:39:20.444848    8651 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:72:e3:b9:fe:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2
	I0914 23:39:20.446664    8651 main.go:141] libmachine: STDOUT: 
	I0914 23:39:20.446681    8651 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:39:20.446700    8651 client.go:171] duration metric: took 460.835083ms to LocalClient.Create
	I0914 23:39:20.453710    8651 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0914 23:39:20.458377    8651 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0914 23:39:20.493575    8651 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0914 23:39:20.602053    8651 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0914 23:39:20.602077    8651 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 639.657541ms
	I0914 23:39:20.602095    8651 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0914 23:39:20.940724    8651 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 23:39:20.940821    8651 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 23:39:21.758280    8651 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 23:39:21.758325    8651 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.795948084s
	I0914 23:39:21.758376    8651 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 23:39:22.446912    8651 start.go:128] duration metric: took 2.483920375s to createHost
	I0914 23:39:22.447011    8651 start.go:83] releasing machines lock for "test-preload-322000", held for 2.48409325s
	W0914 23:39:22.447068    8651 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:39:22.465539    8651 out.go:177] * Deleting "test-preload-322000" in qemu2 ...
	W0914 23:39:22.501357    8651 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:39:22.501392    8651 start.go:729] Will try again in 5 seconds ...
	I0914 23:39:22.527434    8651 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0914 23:39:22.527482    8651 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.564973208s
	I0914 23:39:22.527521    8651 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0914 23:39:22.605662    8651 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0914 23:39:22.605708    8651 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.643306792s
	I0914 23:39:22.605748    8651 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0914 23:39:25.092769    8651 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0914 23:39:25.092816    8651 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.13029725s
	I0914 23:39:25.092843    8651 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0914 23:39:25.107751    8651 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0914 23:39:25.107800    8651 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.145485792s
	I0914 23:39:25.107841    8651 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0914 23:39:25.478554    8651 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0914 23:39:25.478604    8651 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.516298792s
	I0914 23:39:25.478631    8651 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0914 23:39:27.501486    8651 start.go:360] acquireMachinesLock for test-preload-322000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:39:27.501956    8651 start.go:364] duration metric: took 390.833µs to acquireMachinesLock for "test-preload-322000"
	I0914 23:39:27.502079    8651 start.go:93] Provisioning new machine with config: &{Name:test-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-322000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:39:27.502321    8651 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:39:27.512565    8651 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:39:27.565014    8651 start.go:159] libmachine.API.Create for "test-preload-322000" (driver="qemu2")
	I0914 23:39:27.565158    8651 client.go:168] LocalClient.Create starting
	I0914 23:39:27.565281    8651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:39:27.565342    8651 main.go:141] libmachine: Decoding PEM data...
	I0914 23:39:27.565366    8651 main.go:141] libmachine: Parsing certificate...
	I0914 23:39:27.565430    8651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:39:27.565474    8651 main.go:141] libmachine: Decoding PEM data...
	I0914 23:39:27.565485    8651 main.go:141] libmachine: Parsing certificate...
	I0914 23:39:27.565990    8651 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:39:27.736843    8651 main.go:141] libmachine: Creating SSH key...
	I0914 23:39:27.844659    8651 main.go:141] libmachine: Creating Disk image...
	I0914 23:39:27.844665    8651 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:39:27.844925    8651 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2
	I0914 23:39:27.854532    8651 main.go:141] libmachine: STDOUT: 
	I0914 23:39:27.854549    8651 main.go:141] libmachine: STDERR: 
	I0914 23:39:27.854608    8651 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2 +20000M
	I0914 23:39:27.862591    8651 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:39:27.862607    8651 main.go:141] libmachine: STDERR: 
	I0914 23:39:27.862618    8651 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2
	I0914 23:39:27.862623    8651 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:39:27.862639    8651 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:39:27.862671    8651 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:24:47:aa:24:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/test-preload-322000/disk.qcow2
	I0914 23:39:27.864354    8651 main.go:141] libmachine: STDOUT: 
	I0914 23:39:27.864368    8651 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:39:27.864380    8651 client.go:171] duration metric: took 299.221334ms to LocalClient.Create
	I0914 23:39:28.761337    8651 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0914 23:39:28.761391    8651 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.798894084s
	I0914 23:39:28.761433    8651 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0914 23:39:28.761486    8651 cache.go:87] Successfully saved all images to host disk.
	I0914 23:39:29.865650    8651 start.go:128] duration metric: took 2.363344834s to createHost
	I0914 23:39:29.865733    8651 start.go:83] releasing machines lock for "test-preload-322000", held for 2.363773084s
	W0914 23:39:29.866004    8651 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:39:29.875615    8651 out.go:201] 
	W0914 23:39:29.884715    8651 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:39:29.884767    8651 out.go:270] * 
	* 
	W0914 23:39:29.887473    8651 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:39:29.897413    8651 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-14 23:39:29.915353 -0700 PDT m=+645.106723042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-322000 -n test-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-322000 -n test-preload-322000: exit status 7 (66.124291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-322000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-322000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-322000
--- FAIL: TestPreload (10.22s)
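Every GUEST_PROVISION failure in this block has the same root cause, visible in the command line logged above: the qemu2 driver starts QEMU through /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet ..., and the client gets "Connection refused", meaning nothing is accepting connections on that unix socket. The following standalone probe (a hypothetical helper, not part of the minikube test suite) reproduces the check outside the tests by dialing the same socket:

	// probe_socket_vmnet.go — a minimal sketch, assuming the default
	// /var/run/socket_vmnet path shown in the logs above. It dials the
	// unix socket that socket_vmnet_client connects to; "connection
	// refused" here means the socket_vmnet daemon is not listening.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails on the build agent, restarting the socket_vmnet daemon (typically installed as a root LaunchDaemon, since vmnet requires elevated privileges) should clear this entire class of failures; note that TestRunningBinaryUpgrade later in this report, whose profile has SocketVMnetPath empty and uses the 10.0.2.15 user network, does get past VM creation.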

TestScheduledStopUnix (10.12s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-649000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-649000 --memory=2048 --driver=qemu2 : exit status 80 (9.981715292s)

-- stdout --
	* [scheduled-stop-649000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-649000" primary control-plane node in "scheduled-stop-649000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-649000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-649000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-649000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-649000" primary control-plane node in "scheduled-stop-649000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-649000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-649000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-14 23:39:40.044608 -0700 PDT m=+655.236170917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-649000 -n scheduled-stop-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-649000 -n scheduled-stop-649000: exit status 7 (55.956334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-649000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-649000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-649000
--- FAIL: TestScheduledStopUnix (10.12s)

TestSkaffold (12.06s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4097723058 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4097723058 version: (1.061308333s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-647000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-647000 --memory=2600 --driver=qemu2 : exit status 80 (9.874467208s)

-- stdout --
	* [skaffold-647000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-647000" primary control-plane node in "skaffold-647000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-647000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-647000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-647000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-647000" primary control-plane node in "skaffold-647000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-647000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-647000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-14 23:39:52.09616 -0700 PDT m=+667.287951834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-647000 -n skaffold-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-647000 -n skaffold-647000: exit status 7 (63.209583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-647000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-647000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-647000
--- FAIL: TestSkaffold (12.06s)

TestRunningBinaryUpgrade (619.57s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1369443912 start -p running-upgrade-386000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1369443912 start -p running-upgrade-386000 --memory=2200 --vm-driver=qemu2 : (1m1.729950792s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-386000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-386000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.937756333s)

-- stdout --
	* [running-upgrade-386000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-386000" primary control-plane node in "running-upgrade-386000" cluster
	* Updating the running qemu2 "running-upgrade-386000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0914 23:41:17.966512    8967 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:41:17.966701    8967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:41:17.966704    8967 out.go:358] Setting ErrFile to fd 2...
	I0914 23:41:17.966707    8967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:41:17.966825    8967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:41:17.967759    8967 out.go:352] Setting JSON to false
	I0914 23:41:17.984317    8967 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6046,"bootTime":1726376431,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:41:17.984442    8967 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:41:17.988565    8967 out.go:177] * [running-upgrade-386000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:41:17.996576    8967 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:41:17.996598    8967 notify.go:220] Checking for updates...
	I0914 23:41:18.004477    8967 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:41:18.008577    8967 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:41:18.009967    8967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:41:18.012533    8967 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:41:18.015525    8967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:41:18.018890    8967 config.go:182] Loaded profile config "running-upgrade-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 23:41:18.021523    8967 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 23:41:18.024532    8967 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:41:18.027564    8967 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:41:18.034563    8967 start.go:297] selected driver: qemu2
	I0914 23:41:18.034569    8967 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 23:41:18.034622    8967 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:41:18.036883    8967 cni.go:84] Creating CNI manager for ""
	I0914 23:41:18.036922    8967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:41:18.036946    8967 start.go:340] cluster config:
	{Name:running-upgrade-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 23:41:18.036991    8967 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:41:18.045522    8967 out.go:177] * Starting "running-upgrade-386000" primary control-plane node in "running-upgrade-386000" cluster
	I0914 23:41:18.049533    8967 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 23:41:18.049549    8967 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0914 23:41:18.049558    8967 cache.go:56] Caching tarball of preloaded images
	I0914 23:41:18.049614    8967 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:41:18.049621    8967 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
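The download step is skipped here because the preloaded image tarball already exists in the local cache; only its presence is verified. A quick manual confirmation, with the path taken directly from the log above:

    ls -lh /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4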
	I0914 23:41:18.049682    8967 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/config.json ...
	I0914 23:41:18.050062    8967 start.go:360] acquireMachinesLock for running-upgrade-386000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:41:30.317947    8967 start.go:364] duration metric: took 12.268107834s to acquireMachinesLock for "running-upgrade-386000"
	I0914 23:41:30.317964    8967 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:41:30.317975    8967 fix.go:54] fixHost starting: 
	I0914 23:41:30.318717    8967 fix.go:112] recreateIfNeeded on running-upgrade-386000: state=Running err=<nil>
	W0914 23:41:30.318728    8967 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:41:30.323476    8967 out.go:177] * Updating the running qemu2 "running-upgrade-386000" VM ...
	I0914 23:41:30.330513    8967 machine.go:93] provisionDockerMachine start ...
	I0914 23:41:30.330624    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.330789    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.330793    8967 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 23:41:30.403800    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-386000
	
	I0914 23:41:30.403817    8967 buildroot.go:166] provisioning hostname "running-upgrade-386000"
	I0914 23:41:30.403858    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.403982    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.403989    8967 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-386000 && echo "running-upgrade-386000" | sudo tee /etc/hostname
	I0914 23:41:30.481936    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-386000
	
	I0914 23:41:30.482001    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.482133    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.482142    8967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-386000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-386000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-386000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:41:30.556964    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
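The script above is idempotent: it rewrites an existing 127.0.1.1 entry in /etc/hosts, or appends one only when no matching entry exists, so reprovisioning a running VM cannot duplicate the line. A sketch of a manual spot check, reusing the SSH key path and forwarded port shown in the log (not part of the test run itself):

    ssh -i /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa \
        -p 51266 docker@localhost 'hostname; grep 127.0.1.1 /etc/hosts'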
	I0914 23:41:30.556979    8967 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19644-6577/.minikube CaCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19644-6577/.minikube}
	I0914 23:41:30.556988    8967 buildroot.go:174] setting up certificates
	I0914 23:41:30.556998    8967 provision.go:84] configureAuth start
	I0914 23:41:30.557005    8967 provision.go:143] copyHostCerts
	I0914 23:41:30.557075    8967 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem, removing ...
	I0914 23:41:30.557084    8967 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem
	I0914 23:41:30.557194    8967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem (1123 bytes)
	I0914 23:41:30.557362    8967 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem, removing ...
	I0914 23:41:30.557367    8967 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem
	I0914 23:41:30.557413    8967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem (1679 bytes)
	I0914 23:41:30.557516    8967 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem, removing ...
	I0914 23:41:30.557520    8967 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem
	I0914 23:41:30.557564    8967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem (1082 bytes)
	I0914 23:41:30.557653    8967 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-386000 san=[127.0.0.1 localhost minikube running-upgrade-386000]
	I0914 23:41:30.599354    8967 provision.go:177] copyRemoteCerts
	I0914 23:41:30.599402    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:41:30.599411    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	I0914 23:41:30.650108    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 23:41:30.666407    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 23:41:30.672910    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 23:41:30.686009    8967 provision.go:87] duration metric: took 128.994209ms to configureAuth
	I0914 23:41:30.686024    8967 buildroot.go:189] setting minikube options for container-runtime
	I0914 23:41:30.686158    8967 config.go:182] Loaded profile config "running-upgrade-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 23:41:30.686201    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.686298    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.686305    8967 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 23:41:30.794128    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 23:41:30.794149    8967 buildroot.go:70] root file system type: tmpfs
	I0914 23:41:30.794218    8967 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 23:41:30.794289    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.794425    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.794464    8967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 23:41:30.910822    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 23:41:30.910889    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.911007    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.911015    8967 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 23:41:30.994446    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:41:30.994463    8967 machine.go:96] duration metric: took 663.950041ms to provisionDockerMachine
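Note the update-if-changed pattern in the command above: diff exits 0 when docker.service and docker.service.new are identical, so the || branch that moves the new unit into place and restarts Docker fires only when the rendered unit actually differs. The empty command output here indicates no change was needed. A minimal illustration of the exit-code gating, assuming the same two paths:

    diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      && echo "unit unchanged, skip restart" \
      || echo "unit changed, move into place and restart docker"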
	I0914 23:41:30.994469    8967 start.go:293] postStartSetup for "running-upgrade-386000" (driver="qemu2")
	I0914 23:41:30.994477    8967 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:41:30.994562    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:41:30.994574    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	I0914 23:41:31.040345    8967 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:41:31.041718    8967 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 23:41:31.041725    8967 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19644-6577/.minikube/addons for local assets ...
	I0914 23:41:31.041805    8967 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19644-6577/.minikube/files for local assets ...
	I0914 23:41:31.041893    8967 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem -> 70932.pem in /etc/ssl/certs
	I0914 23:41:31.041989    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:41:31.044549    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem --> /etc/ssl/certs/70932.pem (1708 bytes)
	I0914 23:41:31.052269    8967 start.go:296] duration metric: took 57.793708ms for postStartSetup
	I0914 23:41:31.052288    8967 fix.go:56] duration metric: took 734.334583ms for fixHost
	I0914 23:41:31.052345    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:31.052467    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:31.052473    8967 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 23:41:31.133363    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726382491.145148723
	
	I0914 23:41:31.133375    8967 fix.go:216] guest clock: 1726382491.145148723
	I0914 23:41:31.133379    8967 fix.go:229] Guest: 2024-09-14 23:41:31.145148723 -0700 PDT Remote: 2024-09-14 23:41:31.05229 -0700 PDT m=+13.107698585 (delta=92.858723ms)
	I0914 23:41:31.133396    8967 fix.go:200] guest clock delta is within tolerance: 92.858723ms
	I0914 23:41:31.133399    8967 start.go:83] releasing machines lock for "running-upgrade-386000", held for 815.459083ms
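The guest-clock check compares date +%s.%N inside the VM against host wall time: 1726382491.145148723 (guest) minus 1726382491.05229 (host) gives the logged 92.858723 ms delta, which is inside tolerance, so the clock is left alone. A rough manual equivalent, reusing the SSH endpoint from the log and assuming bc is available on the host:

    guest=$(ssh -i /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa \
            -p 51266 docker@localhost 'date +%s.%N')
    host=$(date +%s.%N)
    echo "drift: $(echo "$guest - $host" | bc) s"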
	I0914 23:41:31.133476    8967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:41:31.133478    8967 ssh_runner.go:195] Run: cat /version.json
	I0914 23:41:31.133497    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	I0914 23:41:31.133502    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	W0914 23:41:31.134132    8967 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:51488->127.0.0.1:51266: read: connection reset by peer
	I0914 23:41:31.134153    8967 retry.go:31] will retry after 151.454862ms: ssh: handshake failed: read tcp 127.0.0.1:51488->127.0.0.1:51266: read: connection reset by peer
	W0914 23:41:31.326944    8967 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 23:41:31.327024    8967 ssh_runner.go:195] Run: systemctl --version
	I0914 23:41:31.328866    8967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 23:41:31.330424    8967 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 23:41:31.330454    8967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0914 23:41:31.333123    8967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0914 23:41:31.337377    8967 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
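The two find/sed passes above normalize any pre-existing bridge and podman CNI configs: IPv6 dst/subnet entries are dropped and the pod subnet is forced to 10.244.0.0/16 (with gateway 10.244.0.1 for podman). The effect of the subnet rewrite on a single config line, using a hypothetical input value:

    printf '%s\n' '      "subnet": "10.88.0.0/16"' \
      | sed -E 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g'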
	I0914 23:41:31.337386    8967 start.go:495] detecting cgroup driver to use...
	I0914 23:41:31.337461    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:41:31.342864    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0914 23:41:31.346925    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 23:41:31.350339    8967 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 23:41:31.350376    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 23:41:31.353553    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 23:41:31.356572    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 23:41:31.359555    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 23:41:31.362980    8967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 23:41:31.366291    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 23:41:31.369397    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 23:41:31.372200    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 23:41:31.375428    8967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 23:41:31.378830    8967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 23:41:31.381715    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:31.472533    8967 ssh_runner.go:195] Run: sudo systemctl restart containerd
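The sed batch above pins containerd to the cgroupfs cgroup driver and the runc v2 runtime shim before the restart. One way to confirm the resulting settings on the guest, with the keys taken from the sed expressions above:

    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml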
	I0914 23:41:31.480325    8967 start.go:495] detecting cgroup driver to use...
	I0914 23:41:31.480394    8967 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 23:41:31.488632    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:41:31.497531    8967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:41:31.505944    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:41:31.510954    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 23:41:31.515740    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:41:31.522415    8967 ssh_runner.go:195] Run: which cri-dockerd
	I0914 23:41:31.523954    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 23:41:31.526531    8967 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 23:41:31.531618    8967 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 23:41:31.633049    8967 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 23:41:31.740831    8967 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 23:41:31.740885    8967 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0914 23:41:31.746042    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:31.851097    8967 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 23:41:39.392611    8967 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.541639625s)
	I0914 23:41:39.392675    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0914 23:41:39.397824    8967 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0914 23:41:39.405632    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 23:41:39.410729    8967 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 23:41:39.496505    8967 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 23:41:39.584311    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:39.664118    8967 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 23:41:39.670362    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 23:41:39.675094    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:39.772089    8967 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0914 23:41:39.812725    8967 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 23:41:39.812816    8967 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 23:41:39.815441    8967 start.go:563] Will wait 60s for crictl version
	I0914 23:41:39.815498    8967 ssh_runner.go:195] Run: which crictl
	I0914 23:41:39.817441    8967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 23:41:39.829576    8967 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0914 23:41:39.829654    8967 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 23:41:39.842366    8967 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 23:41:39.861646    8967 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0914 23:41:39.861731    8967 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0914 23:41:39.863096    8967 kubeadm.go:883] updating cluster {Name:running-upgrade-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0914 23:41:39.863142    8967 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 23:41:39.863191    8967 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 23:41:39.874438    8967 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 23:41:39.874447    8967 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 23:41:39.874508    8967 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 23:41:39.877857    8967 ssh_runner.go:195] Run: which lz4
	I0914 23:41:39.879376    8967 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 23:41:39.880846    8967 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 23:41:39.880861    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0914 23:41:40.865514    8967 docker.go:649] duration metric: took 986.197084ms to copy over tarball
	I0914 23:41:40.865586    8967 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 23:41:42.282135    8967 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.41655625s)
	I0914 23:41:42.282148    8967 ssh_runner.go:146] rm: /preloaded.tar.lz4
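Since /preloaded.tar.lz4 did not exist on the guest, the 359 MB tarball is copied over and unpacked, with lz4 as the tar decompressor, directly into /var, which is where Docker's overlay2 image store lives; the docker restart at 23:41:42 then picks the images up. A way to inspect the tarball layout locally, assuming the lz4 CLI is installed on the host:

    lz4 -dc /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 \
      | tar -t | head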
	I0914 23:41:42.297815    8967 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 23:41:42.300696    8967 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0914 23:41:42.305894    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:42.394564    8967 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 23:41:43.626409    8967 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.231853208s)
	I0914 23:41:43.626532    8967 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 23:41:43.646960    8967 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 23:41:43.646969    8967 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 23:41:43.646975    8967 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
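The fallback is triggered by a registry rename: the preload ships images tagged k8s.gcr.io/* (see the docker images output above), while this minikube expects registry.k8s.io/* tags, so every expected image reads as "wasn't preloaded" and the per-image cache path is taken. The mismatch is visible directly on the guest:

    docker images --format '{{.Repository}}:{{.Tag}}' | grep kube-apiserver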
	I0914 23:41:43.650702    8967 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:43.652277    8967 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:43.654427    8967 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0914 23:41:43.654519    8967 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:43.656442    8967 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:43.656590    8967 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:43.658091    8967 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 23:41:43.658248    8967 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:43.659602    8967 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:43.659695    8967 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:43.660605    8967 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:43.660709    8967 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:43.661604    8967 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:43.661655    8967 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:43.662310    8967 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:43.663000    8967 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:44.004530    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0914 23:41:44.018013    8967 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0914 23:41:44.018040    8967 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0914 23:41:44.018102    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0914 23:41:44.028764    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:44.029826    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0914 23:41:44.029921    8967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0914 23:41:44.040587    8967 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0914 23:41:44.040610    8967 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:44.040614    8967 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0914 23:41:44.040641    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0914 23:41:44.040670    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:44.042737    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:44.053230    8967 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0914 23:41:44.053243    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0914 23:41:44.058442    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0914 23:41:44.063654    8967 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0914 23:41:44.063810    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:44.067965    8967 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0914 23:41:44.067987    8967 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:44.068061    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:44.095233    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:44.095591    8967 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0914 23:41:44.095609    8967 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:44.095645    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:44.095828    8967 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0914 23:41:44.106420    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:44.110024    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0914 23:41:44.110088    8967 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0914 23:41:44.110107    8967 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:44.110126    8967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0914 23:41:44.110138    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:44.110713    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 23:41:44.110779    8967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0914 23:41:44.122424    8967 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0914 23:41:44.122447    8967 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:44.122511    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:44.125095    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0914 23:41:44.125124    8967 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0914 23:41:44.125140    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0914 23:41:44.125173    8967 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0914 23:41:44.125183    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0914 23:41:44.146979    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0914 23:41:44.150875    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:44.190675    8967 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0914 23:41:44.190700    8967 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:44.190776    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:44.221036    8967 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0914 23:41:44.221061    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0914 23:41:44.233924    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0914 23:41:44.316400    8967 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0914 23:41:44.436334    8967 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0914 23:41:44.436349    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0914 23:41:44.573033    8967 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 23:41:44.573152    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:44.578210    8967 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0914 23:41:44.584162    8967 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0914 23:41:44.584182    8967 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:44.584256    8967 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:44.594931    8967 cache_images.go:92] duration metric: took 947.966ms to LoadCachedImages
	W0914 23:41:44.594974    8967 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
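LoadCachedImages ultimately fails because the kube-proxy image file was never written to the host-side cache, even though pause, coredns, and etcd transferred and loaded successfully. Listing the cache directory from the stat error would show which image files are actually present:

    ls /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/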
	I0914 23:41:44.594987    8967 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0914 23:41:44.595037    8967 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-386000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 23:41:44.595115    8967 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 23:41:44.616700    8967 cni.go:84] Creating CNI manager for ""
	I0914 23:41:44.616711    8967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:41:44.616716    8967 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 23:41:44.616725    8967 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-386000 NodeName:running-upgrade-386000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 23:41:44.616794    8967 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-386000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 23:41:44.616861    8967 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0914 23:41:44.620556    8967 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 23:41:44.620594    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 23:41:44.623378    8967 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0914 23:41:44.629197    8967 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 23:41:44.634898    8967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
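The rendered kubeadm YAML shown above (2096 bytes, per the scp line) is staged on the guest before kubeadm consumes it; inspecting the staged copy is the most direct way to verify what the init will actually use:

    sudo cat /var/tmp/minikube/kubeadm.yaml.new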
	I0914 23:41:44.640785    8967 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0914 23:41:44.642456    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:44.728724    8967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 23:41:44.733967    8967 certs.go:68] Setting up /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000 for IP: 10.0.2.15
	I0914 23:41:44.733974    8967 certs.go:194] generating shared ca certs ...
	I0914 23:41:44.733982    8967 certs.go:226] acquiring lock for ca certs: {Name:mkfb6b8e69b171081d1b5cff0d9e65dd76b6a9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:44.734127    8967 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.key
	I0914 23:41:44.734160    8967 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.key
	I0914 23:41:44.734169    8967 certs.go:256] generating profile certs ...
	I0914 23:41:44.734242    8967 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.key
	I0914 23:41:44.734264    8967 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key.89f5340c
	I0914 23:41:44.734275    8967 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt.89f5340c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0914 23:41:44.868615    8967 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt.89f5340c ...
	I0914 23:41:44.868625    8967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt.89f5340c: {Name:mkd0124a77422e53adfb1ec4736c793193ce0844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:44.868927    8967 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key.89f5340c ...
	I0914 23:41:44.868934    8967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key.89f5340c: {Name:mk182419fee4e7490848bfa85ed65c73f6d45bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:44.869072    8967 certs.go:381] copying /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt.89f5340c -> /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt
	I0914 23:41:44.870074    8967 certs.go:385] copying /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key.89f5340c -> /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key
	I0914 23:41:44.870263    8967 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/proxy-client.key
	I0914 23:41:44.870398    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093.pem (1338 bytes)
	W0914 23:41:44.870425    8967 certs.go:480] ignoring /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093_empty.pem, impossibly tiny 0 bytes
	I0914 23:41:44.870430    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 23:41:44.870450    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem (1082 bytes)
	I0914 23:41:44.870472    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem (1123 bytes)
	I0914 23:41:44.870491    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem (1679 bytes)
	I0914 23:41:44.870532    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem (1708 bytes)
	I0914 23:41:44.870869    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 23:41:44.878606    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 23:41:44.886349    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 23:41:44.893160    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 23:41:44.900054    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 23:41:44.906742    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 23:41:44.913994    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 23:41:44.921828    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 23:41:44.929078    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093.pem --> /usr/share/ca-certificates/7093.pem (1338 bytes)
	I0914 23:41:44.935928    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem --> /usr/share/ca-certificates/70932.pem (1708 bytes)
	I0914 23:41:44.942523    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 23:41:44.949797    8967 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 23:41:44.955028    8967 ssh_runner.go:195] Run: openssl version
	I0914 23:41:44.956954    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7093.pem && ln -fs /usr/share/ca-certificates/7093.pem /etc/ssl/certs/7093.pem"
	I0914 23:41:44.960062    8967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7093.pem
	I0914 23:41:44.961471    8967 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:29 /usr/share/ca-certificates/7093.pem
	I0914 23:41:44.961506    8967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7093.pem
	I0914 23:41:44.963323    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7093.pem /etc/ssl/certs/51391683.0"
	I0914 23:41:44.966496    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70932.pem && ln -fs /usr/share/ca-certificates/70932.pem /etc/ssl/certs/70932.pem"
	I0914 23:41:44.970063    8967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70932.pem
	I0914 23:41:44.971601    8967 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:29 /usr/share/ca-certificates/70932.pem
	I0914 23:41:44.971627    8967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70932.pem
	I0914 23:41:44.973403    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70932.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 23:41:44.976276    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 23:41:44.979220    8967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:44.980830    8967 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:40 /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:44.980854    8967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:44.982799    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
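	(The three ln -fs commands above implement OpenSSL's hashed-directory lookup: the link name is the certificate's subject hash plus a .0 suffix, which is where 51391683.0, 3ec20f2e.0 and b5213941.0 come from. A minimal bash sketch of the same step, with the path taken from the log; the hash value depends on the certificate:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	)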
	I0914 23:41:44.985917    8967 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 23:41:44.987499    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 23:41:44.989338    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 23:41:44.991308    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 23:41:44.993459    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 23:41:44.996205    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 23:41:44.997917    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
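	(Each openssl x509 -checkend 86400 run above is a 24-hour expiry guard: the command exits 0 only if the certificate is still valid 86400 seconds from now, so a non-zero exit would force the certificate to be regenerated. A one-line equivalent, certificate path from the log:

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"
	)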
	I0914 23:41:44.999943    8967 kubeadm.go:392] StartCluster: {Name:running-upgrade-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 23:41:45.000033    8967 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 23:41:45.010536    8967 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 23:41:45.014670    8967 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 23:41:45.014679    8967 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 23:41:45.014708    8967 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 23:41:45.018173    8967 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:45.018460    8967 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-386000" does not appear in /Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:41:45.018551    8967 kubeconfig.go:62] /Users/jenkins/minikube-integration/19644-6577/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-386000" cluster setting kubeconfig missing "running-upgrade-386000" context setting]
	I0914 23:41:45.018723    8967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/kubeconfig: {Name:mke334fd43bb51604954449e74caf7f81dee5b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:45.019156    8967 kapi.go:59] client config for running-upgrade-386000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.key", CAFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106291800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 23:41:45.019508    8967 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 23:41:45.022388    8967 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-386000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
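	(The drift above covers two settings: Kubernetes 1.24-era tooling expects the CRI endpoint as a URL, hence the unix:// prefix on the cri-dockerd socket, and the kubelet cgroup driver is switched from systemd to cgroupfs, presumably to match the guest's Docker configuration; hairpinMode and runtimeRequestTimeout ride along in the same block. The reconfigure decision itself is just the diff run a few lines earlier; a compressed sketch of check-then-apply, noting that in the log the copy actually happens later, after the kubeconfig cleanup:

	    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    fi
	)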
	I0914 23:41:45.022398    8967 kubeadm.go:1160] stopping kube-system containers ...
	I0914 23:41:45.022448    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 23:41:45.033849    8967 docker.go:483] Stopping containers: [6287f0754e12 46673287e658 ecb52bb838c2 8b51eca867bc ff64f5b5c01a d971e5f7858d 810d336c4764 cdbc600fbbb4 0469132abd7c e723f75a5293 066727c3a39e 05233d01ab13 53d75f5be566 8e2b4c6925a4 b999b318bbc3 ed8d75c41830 e456f04c65a9 fa5f013636cd 1d8885933eb9 910604735e4b 2fa63c68886d]
	I0914 23:41:45.033925    8967 ssh_runner.go:195] Run: docker stop 6287f0754e12 46673287e658 ecb52bb838c2 8b51eca867bc ff64f5b5c01a d971e5f7858d 810d336c4764 cdbc600fbbb4 0469132abd7c e723f75a5293 066727c3a39e 05233d01ab13 53d75f5be566 8e2b4c6925a4 b999b318bbc3 ed8d75c41830 e456f04c65a9 fa5f013636cd 1d8885933eb9 910604735e4b 2fa63c68886d
	I0914 23:41:45.045687    8967 ssh_runner.go:195] Run: sudo systemctl stop kubelet
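	(Before rewriting any configuration, minikube stops every kube-system container found by the docker ps name filter above, then the kubelet. A bash sketch of the same stop step; the real code passes the collected IDs explicitly rather than piping:

	    docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' \
	      | xargs -r docker stop
	    sudo systemctl stop kubelet
	)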
	I0914 23:41:45.154228    8967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:41:45.159994    8967 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 15 06:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 15 06:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 15 06:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 15 06:41 /etc/kubernetes/scheduler.conf
	
	I0914 23:41:45.160044    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/admin.conf
	I0914 23:41:45.163532    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:45.163561    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 23:41:45.167031    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/kubelet.conf
	I0914 23:41:45.170507    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:45.170541    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 23:41:45.174064    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/controller-manager.conf
	I0914 23:41:45.177373    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:45.177404    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 23:41:45.180610    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/scheduler.conf
	I0914 23:41:45.183721    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:45.183751    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
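	(The four grep/rm pairs above apply one rule per static kubeconfig: keep the file only if it already references the expected control-plane endpoint, otherwise delete it so the kubeadm kubeconfig phase below regenerates it. Expressed as a loop, with the endpoint and paths taken from the log:

	    ep="https://control-plane.minikube.internal:51345"
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done
	)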
	I0914 23:41:45.186406    8967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:41:45.189335    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:45.223431    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:45.707615    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:45.943459    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:45.970294    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
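	(Rather than a full kubeadm init, the restart path replays the individual init phases against the refreshed config: certs, kubeconfig, kubelet-start, control-plane, etcd, with the versioned kubeadm binary put first on PATH. The shape of each invocation, straight from the log:

	    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	)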
	I0914 23:41:45.993106    8967 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:41:45.993193    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:46.495563    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:46.995554    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:47.495573    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:47.995517    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:48.493930    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:48.995251    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:49.495266    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:49.499615    8967 api_server.go:72] duration metric: took 3.506577167s to wait for apiserver process to appear ...
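	(The repeated pgrep runs above are a roughly 500 ms poll loop, visible in the timestamps, waiting for a kube-apiserver process matching the minikube arguments to appear; here it took about 3.5 s. A minimal equivalent using the same pattern and flags as the log:

	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 0.5
	    done
	)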
	I0914 23:41:49.499623    8967 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:41:49.499633    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:54.501620    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:54.501643    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:59.501767    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:59.501806    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:04.502017    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:04.502051    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:09.502439    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:09.502552    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:14.503575    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:14.503669    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:19.504977    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:19.504998    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:24.506060    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:24.506085    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:29.507468    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:29.507507    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:34.509353    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:34.509374    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:39.511444    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:39.511460    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:44.513568    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:44.513587    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:49.515793    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
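	(Every healthz probe above hits the 5-second client deadline without an answer, so the apiserver never reports healthy; minikube then falls back to the diagnostic sweep below, snapshotting container IDs and logs for each control-plane component before retrying. A rough curl equivalent of one probe; minikube actually uses a Go REST client configured with the cluster CA, for which -k stands in here:

	    curl -sk --max-time 5 https://10.0.2.15:8443/healthz
	)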
	I0914 23:42:49.516208    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:42:49.550896    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:42:49.551056    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:42:49.569456    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:42:49.569558    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:42:49.584450    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:42:49.584546    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:42:49.596485    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:42:49.596573    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:42:49.607230    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:42:49.607304    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:42:49.617705    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:42:49.617788    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:42:49.628373    8967 logs.go:276] 0 containers: []
	W0914 23:42:49.628384    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:42:49.628453    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:42:49.641885    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:42:49.641903    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:42:49.641909    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:42:49.646926    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:42:49.646934    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:42:49.658337    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:42:49.658353    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:42:49.670317    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:42:49.670328    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:42:49.697354    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:42:49.697365    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:42:49.800811    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:42:49.800822    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:42:49.814691    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:42:49.814701    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:42:49.829324    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:42:49.829335    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:42:49.844631    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:42:49.844644    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:42:49.856328    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:42:49.856337    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:42:49.883852    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:42:49.883864    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:42:49.906748    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:42:49.906758    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:42:49.918576    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:42:49.918588    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
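	(The container-status command above has a built-in fallback: the backticks resolve to the crictl path when it is installed, and otherwise to the bare name, which fails, letting the || drop through to docker ps -a. Restated without backticks:

	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	)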
	I0914 23:42:49.931216    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:42:49.931231    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:42:49.950864    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:42:49.950876    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:42:49.995739    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:42:49.995747    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:42:50.013062    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:42:50.013074    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:42:50.029154    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:42:50.029164    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:42:50.042105    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:42:50.042116    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:42:52.556025    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:57.558239    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:57.558766    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:42:57.600006    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:42:57.600133    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:42:57.616197    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:42:57.616298    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:42:57.629653    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:42:57.629776    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:42:57.642174    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:42:57.642248    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:42:57.653491    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:42:57.653568    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:42:57.664448    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:42:57.664531    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:42:57.674937    8967 logs.go:276] 0 containers: []
	W0914 23:42:57.674950    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:42:57.675025    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:42:57.685612    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:42:57.685628    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:42:57.685633    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:42:57.690519    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:42:57.690527    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:42:57.715030    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:42:57.715040    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:42:57.729258    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:42:57.729269    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:42:57.740858    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:42:57.740870    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:42:57.755573    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:42:57.755583    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:42:57.771069    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:42:57.771079    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:42:57.808091    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:42:57.808101    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:42:57.822519    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:42:57.822530    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:42:57.834264    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:42:57.834275    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:42:57.845917    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:42:57.845928    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:42:57.860797    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:42:57.860807    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:42:57.878136    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:42:57.878147    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:42:57.889840    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:42:57.889852    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:42:57.932227    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:42:57.932235    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:42:57.948135    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:42:57.948145    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:42:57.966379    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:42:57.966390    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:42:57.978394    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:42:57.978405    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:42:58.003648    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:42:58.003666    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:00.518126    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:05.520346    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:05.520610    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:05.543568    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:05.543714    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:05.558302    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:05.558403    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:05.570820    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:05.570905    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:05.581395    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:05.581473    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:05.592538    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:05.592615    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:05.603120    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:05.603204    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:05.613317    8967 logs.go:276] 0 containers: []
	W0914 23:43:05.613331    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:05.613403    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:05.625380    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:05.625396    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:05.625402    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:05.639583    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:05.639596    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:05.654378    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:05.654390    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:05.665972    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:05.665983    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:05.687745    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:05.687755    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:05.731196    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:05.731204    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:05.745450    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:05.745459    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:05.764989    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:05.764998    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:05.790043    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:05.790050    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:05.802557    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:05.802572    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:05.820351    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:05.820365    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:05.835273    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:05.835284    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:05.863162    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:05.863169    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:05.899784    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:05.899795    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:05.911119    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:05.911130    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:05.922886    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:05.922896    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:05.935850    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:05.935862    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:05.940582    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:05.940589    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:05.954522    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:05.954534    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:08.468489    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:13.470213    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:13.470386    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:13.483433    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:13.483524    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:13.494811    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:13.494899    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:13.505640    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:13.505731    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:13.516492    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:13.516568    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:13.527420    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:13.527512    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:13.538464    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:13.538547    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:13.549256    8967 logs.go:276] 0 containers: []
	W0914 23:43:13.549268    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:13.549337    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:13.559896    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:13.559912    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:13.559918    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:13.584853    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:13.584862    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:13.596639    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:13.596651    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:13.608189    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:13.608202    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:13.624786    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:13.624797    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:13.636226    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:13.636237    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:13.650108    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:13.650116    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:13.667447    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:13.667457    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:13.710595    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:13.710606    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:13.725386    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:13.725395    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:13.737239    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:13.737250    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:13.749647    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:13.749662    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:13.754680    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:13.754687    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:13.789651    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:13.789664    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:13.804351    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:13.804363    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:13.815536    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:13.815548    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:13.834724    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:13.834733    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:13.852695    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:13.852705    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:13.879687    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:13.879695    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:16.394398    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:21.396590    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:21.396722    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:21.407997    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:21.408086    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:21.419230    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:21.419327    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:21.430163    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:21.430250    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:21.441193    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:21.441266    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:21.451640    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:21.451725    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:21.462512    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:21.462592    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:21.472991    8967 logs.go:276] 0 containers: []
	W0914 23:43:21.473009    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:21.473076    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:21.483636    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:21.483655    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:21.483661    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:21.488346    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:21.488353    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:21.502022    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:21.502034    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:21.516500    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:21.516510    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:21.543160    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:21.543168    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:21.554591    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:21.554602    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:21.566242    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:21.566252    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:21.584914    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:21.584925    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:21.597270    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:21.597282    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:21.641138    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:21.641147    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:21.652490    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:21.652502    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:21.664562    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:21.664572    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:21.676150    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:21.676160    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:21.717028    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:21.717039    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:21.742226    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:21.742237    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:21.758466    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:21.758478    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:21.775455    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:21.775469    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:21.786631    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:21.786643    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:21.804187    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:21.804197    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:24.324365    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:29.326823    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:29.327284    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:29.360497    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:29.360660    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:29.380178    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:29.380303    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:29.398628    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:29.398715    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:29.410275    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:29.410361    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:29.421707    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:29.421788    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:29.432488    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:29.432576    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:29.443382    8967 logs.go:276] 0 containers: []
	W0914 23:43:29.443397    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:29.443475    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:29.457003    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:29.457022    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:29.457028    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:29.469744    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:29.469757    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:29.483029    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:29.483041    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:29.497231    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:29.497241    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:29.523117    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:29.523129    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:29.534792    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:29.534801    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:29.552721    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:29.552732    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:29.570078    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:29.570088    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:29.582075    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:29.582087    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:29.609382    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:29.609393    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:29.653342    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:29.653350    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:29.658215    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:29.658223    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:29.693910    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:29.693921    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:29.705529    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:29.705541    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:29.718730    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:29.718742    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:29.730880    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:29.730891    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:29.748641    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:29.748651    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:29.760284    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:29.760294    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:29.774099    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:29.774111    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:32.297751    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:37.300061    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:37.300390    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:37.335075    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:37.335216    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:37.356811    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:37.356901    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:37.369667    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:37.369732    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:37.388118    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:37.388205    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:37.399629    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:37.399711    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:37.410620    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:37.410701    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:37.421085    8967 logs.go:276] 0 containers: []
	W0914 23:43:37.421097    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:37.421169    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:37.431406    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:37.431422    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:37.431428    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:37.457091    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:37.457101    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:37.471254    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:37.471264    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:37.482221    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:37.482233    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:37.494397    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:37.494407    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:37.499314    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:37.499321    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:37.513331    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:37.513341    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:37.533826    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:37.533836    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:37.545188    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:37.545203    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:37.558704    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:37.558718    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:37.569885    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:37.569896    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:37.591853    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:37.591868    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:37.634060    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:37.634070    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:37.645542    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:37.645555    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:37.658660    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:37.658672    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:37.679939    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:37.679949    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:37.697156    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:37.697166    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:37.724096    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:37.724126    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:37.761027    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:37.761041    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
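Each "Gathering logs for <component> [<id>] ..." pair in the pass above runs `docker logs --tail 400 <id>` through `/bin/bash -c` on the guest. A local Go approximation of one such invocation, with a hypothetical helper name and example IDs taken from the enumeration step:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last 400 log lines of one container,
// mirroring the `/bin/bash -c "docker logs --tail 400 <id>"` commands above.
func tailContainerLogs(id string) (string, error) {
	// docker logs replays the container's stdout and stderr, so capture both.
	out, err := exec.Command("/bin/bash", "-c",
		"docker logs --tail 400 "+id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{"8b51eca867bc", "e723f75a5293", "6cd07b9ac53e"} {
		logs, err := tailContainerLogs(id)
		if err != nil {
			fmt.Printf("%s: error: %v\n", id, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", id, logs)
	}
}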
	I0914 23:43:40.277572    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:45.280094    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
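The "Checking apiserver healthz ... stopped: ... Client.Timeout exceeded while awaiting headers" pairs bracketing each pass are a polling probe: an HTTPS GET against https://10.0.2.15:8443/healthz with roughly a five-second budget, retried after every log-gathering sweep. A minimal sketch of such a probe; certificate verification is skipped here purely for brevity (the real check presumably trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped"
		Transport: &http.Transport{
			// Assumption for the sketch only; not how minikube configures TLS.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "Client.Timeout exceeded while awaiting headers"
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}

The five-second budget lines up with the timestamps throughout this section: a check issued at 23:43:40 reports "stopped" at 23:43:45, and every subsequent cycle shows the same gap.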
	I0914 23:43:45.280241    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:45.297865    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:45.297965    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:45.311298    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:45.311381    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:45.322830    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:45.322919    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:45.333676    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:45.333758    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:45.344741    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:45.344825    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:45.355618    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:45.355708    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:45.366049    8967 logs.go:276] 0 containers: []
	W0914 23:43:45.366061    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:45.366132    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:45.376845    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:45.376860    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:45.376866    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:45.389466    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:45.389482    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:45.414589    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:45.414597    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:45.426599    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:45.426611    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:45.440999    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:45.441013    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:45.459666    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:45.459681    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:45.472491    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:45.472503    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:45.490435    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:45.490446    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:45.501894    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:45.501906    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:45.513240    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:45.513251    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:45.524591    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:45.524602    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:45.560711    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:45.560721    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:45.572892    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:45.572901    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:45.584513    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:45.584524    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:45.600143    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:45.600156    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:45.617295    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:45.617309    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:45.636474    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:45.636485    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:45.677051    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:45.677060    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:45.681624    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:45.681631    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:48.215214    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:53.217852    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:53.218058    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:53.239031    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:53.239145    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:53.258323    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:53.258416    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:53.269985    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:53.270070    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:53.280633    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:53.280704    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:53.291232    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:53.291314    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:53.302034    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:53.302123    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:53.312470    8967 logs.go:276] 0 containers: []
	W0914 23:43:53.312483    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:53.312559    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:53.322638    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:53.322654    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:53.322659    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:53.334683    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:53.334693    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:53.351812    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:53.351821    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:53.362977    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:53.362990    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:53.406936    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:53.406947    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:53.420992    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:53.421002    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:53.432113    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:53.432126    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:53.444213    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:53.444223    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:53.485885    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:53.485896    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:53.490972    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:53.490980    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:53.516691    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:53.516702    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:53.528550    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:53.528561    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:53.542958    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:53.542967    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:53.557591    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:53.557601    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:53.569864    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:53.569875    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:53.582179    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:53.582190    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:53.600701    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:53.600710    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:53.612151    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:53.612160    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:53.625905    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:53.625916    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:56.153514    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:01.155223    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:01.155504    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:01.183499    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:01.183649    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:01.201604    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:01.201705    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:01.214974    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:01.215065    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:01.226614    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:01.226695    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:01.238099    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:01.238182    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:01.248748    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:01.248838    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:01.258964    8967 logs.go:276] 0 containers: []
	W0914 23:44:01.258977    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:01.259052    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:01.271166    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:01.271182    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:01.271187    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:01.282956    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:01.282967    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:01.300420    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:01.300430    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:01.304927    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:01.304936    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:01.320716    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:01.320729    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:01.334905    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:01.334915    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:01.364671    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:01.364682    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:01.378905    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:01.378919    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:01.421636    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:01.421646    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:01.458063    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:01.458076    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:01.475227    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:01.475239    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:01.487311    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:01.487324    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:01.499467    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:01.499480    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:01.525537    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:01.525595    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:01.540858    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:01.540868    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:01.555937    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:01.555949    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:01.574246    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:01.574258    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:01.586185    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:01.586195    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:01.600445    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:01.600457    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:04.121720    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:09.123908    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:09.124025    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:09.141205    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:09.141296    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:09.151683    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:09.151767    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:09.161835    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:09.161917    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:09.172983    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:09.173060    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:09.183496    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:09.183588    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:09.193785    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:09.193872    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:09.204682    8967 logs.go:276] 0 containers: []
	W0914 23:44:09.204693    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:09.204761    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:09.215305    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:09.215320    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:09.215325    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:09.229398    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:09.229408    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:09.241488    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:09.241501    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:09.254107    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:09.254118    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:09.268482    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:09.268492    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:09.285471    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:09.285482    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:09.303290    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:09.303300    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:09.338239    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:09.338249    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:09.354932    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:09.354946    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:09.366445    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:09.366481    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:09.410486    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:09.410497    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:09.435454    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:09.435465    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:09.450358    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:09.450368    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:09.468302    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:09.468314    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:09.479191    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:09.479202    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:09.504053    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:09.504060    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:09.508163    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:09.508170    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:09.519621    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:09.519632    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:09.530828    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:09.530839    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:12.045613    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:17.046539    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:17.046894    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:17.088970    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:17.089127    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:17.107800    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:17.107898    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:17.123973    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:17.124064    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:17.135699    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:17.135782    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:17.148050    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:17.148142    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:17.162743    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:17.162831    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:17.174585    8967 logs.go:276] 0 containers: []
	W0914 23:44:17.174600    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:17.174673    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:17.187168    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:17.187185    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:17.187191    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:17.198672    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:17.198682    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:17.222906    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:17.222914    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:17.235433    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:17.235444    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:17.239982    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:17.239989    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:17.251961    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:17.251971    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:17.263669    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:17.263679    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:17.311097    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:17.311106    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:17.324862    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:17.324872    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:17.342843    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:17.342853    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:17.354801    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:17.354811    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:17.375703    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:17.375716    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:17.388047    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:17.388057    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:17.402583    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:17.402596    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:17.417563    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:17.417573    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:17.429574    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:17.429585    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:17.446978    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:17.446987    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:17.482509    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:17.482524    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:17.508165    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:17.508176    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:20.023229    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:25.023690    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:25.023966    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:25.057174    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:25.057333    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:25.074467    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:25.074566    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:25.088160    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:25.088255    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:25.099804    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:25.099891    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:25.110965    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:25.111048    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:25.123440    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:25.123525    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:25.133901    8967 logs.go:276] 0 containers: []
	W0914 23:44:25.133913    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:25.133977    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:25.144933    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:25.144948    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:25.144954    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:25.150038    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:25.150048    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:25.165370    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:25.165381    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:25.180576    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:25.180585    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:25.194083    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:25.194098    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:25.205695    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:25.205707    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:25.248904    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:25.248913    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:25.288404    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:25.288418    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:25.302559    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:25.302573    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:25.314193    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:25.314209    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:25.331577    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:25.331587    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:25.349795    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:25.349810    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:25.374302    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:25.374313    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:25.388021    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:25.388035    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:25.402948    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:25.402960    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:25.414370    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:25.414380    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:25.426050    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:25.426061    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:25.451783    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:25.451793    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:25.466521    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:25.466533    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:27.978947    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:32.981454    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:32.981722    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:33.008735    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:33.008848    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:33.026020    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:33.026107    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:33.037020    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:33.037103    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:33.048301    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:33.048387    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:33.058922    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:33.059006    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:33.069954    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:33.070026    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:33.080274    8967 logs.go:276] 0 containers: []
	W0914 23:44:33.080285    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:33.080357    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:33.090716    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:33.090729    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:33.090734    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:33.130600    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:33.130612    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:33.156983    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:33.156996    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:33.168932    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:33.168943    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:33.184558    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:33.184569    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:33.206067    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:33.206078    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:33.226884    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:33.226896    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:33.271952    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:33.271972    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:33.295948    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:33.295964    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:33.321604    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:33.321617    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:33.335980    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:33.335990    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:33.347244    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:33.347254    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:33.359314    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:33.359324    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:33.371309    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:33.371323    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:33.388808    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:33.388821    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:33.401300    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:33.401310    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:33.409271    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:33.409279    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:33.423245    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:33.423256    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:33.435326    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:33.435338    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:35.961730    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:40.964031    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:40.964404    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:40.996933    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:40.997107    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:41.018114    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:41.018224    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:41.032170    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:41.032256    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:41.047536    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:41.047616    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:41.058511    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:41.058594    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:41.069442    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:41.069523    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:41.084180    8967 logs.go:276] 0 containers: []
	W0914 23:44:41.084191    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:41.084257    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:41.095317    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:41.095335    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:41.095341    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:41.138832    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:41.138846    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:41.167720    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:41.167730    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:41.182147    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:41.182156    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:41.194525    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:41.194537    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:41.212523    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:41.212534    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:41.224393    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:41.224404    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:41.247788    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:41.247796    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:41.284609    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:41.284623    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:41.299502    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:41.299518    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:41.318154    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:41.318167    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:41.330379    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:41.330389    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:41.345065    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:41.345075    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:41.363735    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:41.363747    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:41.375037    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:41.375046    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:41.379389    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:41.379395    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:41.392739    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:41.392750    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:41.405542    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:41.405554    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:41.421354    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:41.421369    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:43.935192    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:48.937409    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:48.937652    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:48.965175    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:48.965307    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:48.984689    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:48.984790    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:49.000306    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:49.000394    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:49.010860    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:49.010941    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:49.021105    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:49.021192    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:49.031969    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:49.032062    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:49.046742    8967 logs.go:276] 0 containers: []
	W0914 23:44:49.046754    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:49.046818    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:49.057344    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:49.057359    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:49.057364    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:49.097378    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:49.097389    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:49.112708    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:49.112725    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:49.131984    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:49.131994    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:49.149963    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:49.149973    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:49.161236    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:49.161246    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:49.185696    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:49.185702    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:49.190285    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:49.190291    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:49.204442    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:49.204452    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:49.219024    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:49.219035    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:49.230681    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:49.230693    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:49.242789    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:49.242802    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:49.259882    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:49.259895    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:49.304262    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:49.304271    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:49.330145    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:49.330156    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:49.343110    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:49.343118    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:49.356898    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:49.356908    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:49.375340    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:49.375353    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:49.388563    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:49.388574    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:51.902688    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:56.904173    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:56.904502    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:56.934950    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:56.935084    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:56.952478    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:56.952590    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:56.967159    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:56.967255    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:56.978546    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:56.978634    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:56.988665    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:56.988743    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:56.999376    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:56.999459    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:57.009460    8967 logs.go:276] 0 containers: []
	W0914 23:44:57.009472    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:57.009543    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:57.020387    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:57.020403    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:57.020408    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:57.025533    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:57.025538    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:57.039580    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:57.039593    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:57.063881    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:57.063892    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:57.078735    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:57.078750    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:57.100301    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:57.100311    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:57.125805    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:57.125820    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:57.140142    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:57.140158    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:57.157369    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:57.157380    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:57.169006    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:57.169017    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:57.210950    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:57.210977    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:57.247238    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:57.247250    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:57.260955    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:57.260965    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:57.276469    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:57.276483    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:57.288244    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:57.288254    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:57.299840    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:57.299853    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:57.311448    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:57.311641    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:57.326987    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:57.327003    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:57.343636    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:57.343647    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:59.858461    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:04.860951    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:04.861240    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:04.891600    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:04.891746    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:04.908693    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:04.908789    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:04.921881    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:04.921959    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:04.932939    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:04.933005    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:04.943839    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:04.943923    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:04.954572    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:04.954645    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:04.968612    8967 logs.go:276] 0 containers: []
	W0914 23:45:04.968626    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:04.968695    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:04.979061    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:04.979078    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:04.979084    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:04.990163    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:04.990176    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:05.002133    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:05.002145    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:05.019002    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:05.019012    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:05.030597    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:05.030606    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:05.035230    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:05.035236    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:05.046490    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:05.046501    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:05.087993    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:05.088004    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:05.102162    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:05.102176    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:05.118597    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:05.118611    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:05.130332    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:05.130345    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:05.142526    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:05.142537    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:05.157843    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:05.157857    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:05.169530    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:05.169541    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:05.194313    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:05.194325    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:05.206141    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:05.206154    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:05.223079    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:05.223091    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:05.247317    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:05.247327    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:05.288205    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:05.288216    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:07.813351    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:12.814568    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:12.814845    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:12.840502    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:12.840640    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:12.859051    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:12.859165    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:12.872400    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:12.872490    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:12.884155    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:12.884239    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:12.894676    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:12.894751    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:12.905488    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:12.905566    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:12.915996    8967 logs.go:276] 0 containers: []
	W0914 23:45:12.916005    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:12.916068    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:12.929335    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:12.929359    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:12.929365    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:12.972087    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:12.972097    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:12.976454    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:12.976462    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:12.990244    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:12.990253    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:13.004103    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:13.004112    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:13.029041    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:13.029053    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:13.040627    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:13.040639    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:13.052231    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:13.052245    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:13.071415    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:13.071426    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:13.085887    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:13.085901    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:13.097706    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:13.097716    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:13.109781    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:13.109797    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:13.129538    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:13.129554    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:13.142780    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:13.142792    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:13.178239    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:13.178249    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:13.189637    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:13.189649    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:13.211013    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:13.211024    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:13.229047    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:13.229058    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:13.252780    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:13.252786    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:15.766776    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:20.769008    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:20.769255    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:20.794267    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:20.794389    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:20.808936    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:20.809031    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:20.820920    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:20.821003    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:20.831856    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:20.831938    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:20.843585    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:20.843660    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:20.854665    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:20.854755    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:20.864714    8967 logs.go:276] 0 containers: []
	W0914 23:45:20.864725    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:20.864791    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:20.875322    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:20.875339    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:20.875344    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:20.887872    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:20.887885    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:20.900993    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:20.901006    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:20.918699    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:20.918709    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:20.953641    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:20.953653    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:20.967415    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:20.967425    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:20.992408    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:20.992423    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:21.003905    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:21.003915    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:21.023619    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:21.023629    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:21.036109    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:21.036120    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:21.053690    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:21.053703    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:21.067835    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:21.067850    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:21.080580    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:21.080591    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:21.099559    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:21.099571    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:21.143892    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:21.143910    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:21.149006    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:21.149013    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:21.163943    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:21.163953    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:21.175966    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:21.175976    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:21.188081    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:21.188094    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:23.713383    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:28.715573    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:28.715738    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:28.730003    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:28.730097    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:28.741832    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:28.741915    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:28.752446    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:28.752524    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:28.763888    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:28.763972    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:28.774392    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:28.774475    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:28.785195    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:28.785268    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:28.796437    8967 logs.go:276] 0 containers: []
	W0914 23:45:28.796447    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:28.796515    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:28.807634    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:28.807649    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:28.807654    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:28.825761    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:28.825771    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:28.839565    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:28.839581    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:28.844507    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:28.844513    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:28.862728    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:28.862738    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:28.874011    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:28.874021    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:28.886419    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:28.886431    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:28.904568    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:28.904578    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:28.929178    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:28.929186    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:28.942576    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:28.942586    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:28.977670    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:28.977681    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:29.004798    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:29.004809    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:29.016439    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:29.016451    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:29.028519    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:29.028534    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:29.042976    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:29.042987    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:29.057848    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:29.057861    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:29.072480    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:29.072490    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:29.113597    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:29.113608    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:29.125328    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:29.125338    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:31.639215    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:36.641782    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:36.641992    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:36.661871    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:36.661989    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:36.675811    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:36.675908    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:36.688439    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:36.688531    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:36.701724    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:36.701808    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:36.712133    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:36.712213    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:36.730114    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:36.730204    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:36.742212    8967 logs.go:276] 0 containers: []
	W0914 23:45:36.742223    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:36.742294    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:36.753758    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:36.753775    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:36.753780    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:36.768100    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:36.768111    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:36.783758    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:36.783768    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:36.795784    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:36.795794    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:36.800625    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:36.800631    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:36.814016    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:36.814024    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:36.840468    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:36.840480    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:36.855010    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:36.855021    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:36.866767    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:36.866783    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:36.877791    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:36.877803    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:36.890356    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:36.890369    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:36.929727    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:36.929741    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:36.943868    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:36.943882    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:36.957816    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:36.957827    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:36.999284    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:36.999298    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:37.011734    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:37.011749    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:37.023945    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:37.023955    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:37.041973    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:37.041984    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:37.060128    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:37.060138    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:39.584003    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:44.586554    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:44.586684    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:44.598019    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:44.598115    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:44.609592    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:44.609687    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:44.621783    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:44.621861    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:44.633046    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:44.633136    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:44.644375    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:44.644459    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:44.655046    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:44.655133    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:44.665631    8967 logs.go:276] 0 containers: []
	W0914 23:45:44.665643    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:44.665714    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:44.677244    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:44.677261    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:44.677266    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:44.720674    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:44.720695    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:44.737306    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:44.737317    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:44.752160    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:44.752173    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:44.770891    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:44.770901    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:44.789709    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:44.789719    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:44.801258    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:44.801270    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:44.818127    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:44.818140    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:44.822608    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:44.822615    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:44.862640    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:44.862651    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:44.892343    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:44.892357    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:44.907874    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:44.907884    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:44.919436    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:44.919446    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:44.931069    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:44.931080    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:44.945023    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:44.945038    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:44.958595    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:44.958605    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:44.970348    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:44.970359    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:44.981894    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:44.981906    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:44.994872    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:44.994884    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:47.521802    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:52.522246    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:52.522314    8967 kubeadm.go:597] duration metric: took 4m7.51233775s to restartPrimaryControlPlane
	W0914 23:45:52.522373    8967 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 23:45:52.522406    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0914 23:45:53.608323    8967 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.085925667s)
	I0914 23:45:53.608424    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:45:53.613554    8967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:45:53.616385    8967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:45:53.619931    8967 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 23:45:53.619937    8967 kubeadm.go:157] found existing configuration files:
	
	I0914 23:45:53.619970    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/admin.conf
	I0914 23:45:53.623171    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 23:45:53.623205    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 23:45:53.626172    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/kubelet.conf
	I0914 23:45:53.629346    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 23:45:53.629385    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 23:45:53.632377    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/controller-manager.conf
	I0914 23:45:53.635756    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 23:45:53.635793    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 23:45:53.638958    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/scheduler.conf
	I0914 23:45:53.641567    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 23:45:53.641599    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 23:45:53.644461    8967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 23:45:53.661283    8967 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0914 23:45:53.661312    8967 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 23:45:53.711251    8967 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 23:45:53.711315    8967 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 23:45:53.711404    8967 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 23:45:53.760001    8967 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 23:45:53.765340    8967 out.go:235]   - Generating certificates and keys ...
	I0914 23:45:53.765373    8967 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 23:45:53.765412    8967 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 23:45:53.765460    8967 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 23:45:53.765494    8967 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 23:45:53.765529    8967 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 23:45:53.765567    8967 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 23:45:53.765607    8967 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 23:45:53.765643    8967 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 23:45:53.765684    8967 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 23:45:53.765725    8967 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 23:45:53.765752    8967 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 23:45:53.765783    8967 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 23:45:53.818424    8967 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 23:45:53.897558    8967 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 23:45:53.975307    8967 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 23:45:54.095769    8967 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 23:45:54.129079    8967 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 23:45:54.129451    8967 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 23:45:54.129533    8967 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 23:45:54.216611    8967 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 23:45:54.220841    8967 out.go:235]   - Booting up control plane ...
	I0914 23:45:54.220894    8967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 23:45:54.220937    8967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 23:45:54.221658    8967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 23:45:54.221998    8967 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 23:45:54.222835    8967 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 23:45:58.724700    8967 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501784 seconds
	I0914 23:45:58.724770    8967 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 23:45:58.728505    8967 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 23:45:59.237566    8967 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 23:45:59.237698    8967 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-386000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 23:45:59.741749    8967 kubeadm.go:310] [bootstrap-token] Using token: cl4op1.2r209r77gn303h2a
	I0914 23:45:59.748950    8967 out.go:235]   - Configuring RBAC rules ...
	I0914 23:45:59.749017    8967 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 23:45:59.749063    8967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 23:45:59.754608    8967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 23:45:59.755458    8967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 23:45:59.756423    8967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 23:45:59.757336    8967 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 23:45:59.760629    8967 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 23:45:59.953060    8967 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 23:46:00.145269    8967 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 23:46:00.145875    8967 kubeadm.go:310] 
	I0914 23:46:00.145911    8967 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 23:46:00.145916    8967 kubeadm.go:310] 
	I0914 23:46:00.145970    8967 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 23:46:00.145975    8967 kubeadm.go:310] 
	I0914 23:46:00.145990    8967 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 23:46:00.146036    8967 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 23:46:00.146076    8967 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 23:46:00.146080    8967 kubeadm.go:310] 
	I0914 23:46:00.146108    8967 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 23:46:00.146111    8967 kubeadm.go:310] 
	I0914 23:46:00.146142    8967 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 23:46:00.146145    8967 kubeadm.go:310] 
	I0914 23:46:00.146173    8967 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 23:46:00.146214    8967 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 23:46:00.146261    8967 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 23:46:00.146264    8967 kubeadm.go:310] 
	I0914 23:46:00.146307    8967 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 23:46:00.146343    8967 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 23:46:00.146345    8967 kubeadm.go:310] 
	I0914 23:46:00.146386    8967 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cl4op1.2r209r77gn303h2a \
	I0914 23:46:00.146444    8967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3496b266fd1cfe9142221ef290f09745f4c6a279684c03f4e3160434112e5d40 \
	I0914 23:46:00.146465    8967 kubeadm.go:310] 	--control-plane 
	I0914 23:46:00.146469    8967 kubeadm.go:310] 
	I0914 23:46:00.146525    8967 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 23:46:00.146534    8967 kubeadm.go:310] 
	I0914 23:46:00.146578    8967 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cl4op1.2r209r77gn303h2a \
	I0914 23:46:00.146650    8967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3496b266fd1cfe9142221ef290f09745f4c6a279684c03f4e3160434112e5d40 
	I0914 23:46:00.146721    8967 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 23:46:00.146729    8967 cni.go:84] Creating CNI manager for ""
	I0914 23:46:00.146738    8967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:46:00.155435    8967 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 23:46:00.159611    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 23:46:00.162644    8967 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 23:46:00.167402    8967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 23:46:00.167450    8967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:46:00.167467    8967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-386000 minikube.k8s.io/updated_at=2024_09_14T23_46_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=running-upgrade-386000 minikube.k8s.io/primary=true
	I0914 23:46:00.208621    8967 kubeadm.go:1113] duration metric: took 41.211375ms to wait for elevateKubeSystemPrivileges
	I0914 23:46:00.208650    8967 ops.go:34] apiserver oom_adj: -16
	I0914 23:46:00.208655    8967 kubeadm.go:394] duration metric: took 4m15.213571292s to StartCluster
	I0914 23:46:00.208665    8967 settings.go:142] acquiring lock: {Name:mk03c42e45b73d6f59721a178a8a31fc79d22668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:46:00.208758    8967 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:46:00.209188    8967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/kubeconfig: {Name:mke334fd43bb51604954449e74caf7f81dee5b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:46:00.209389    8967 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:46:00.209444    8967 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 23:46:00.209483    8967 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-386000"
	I0914 23:46:00.209493    8967 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-386000"
	W0914 23:46:00.209496    8967 addons.go:243] addon storage-provisioner should already be in state true
	I0914 23:46:00.209505    8967 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-386000"
	I0914 23:46:00.209509    8967 host.go:66] Checking if "running-upgrade-386000" exists ...
	I0914 23:46:00.209514    8967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-386000"
	I0914 23:46:00.209514    8967 config.go:182] Loaded profile config "running-upgrade-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 23:46:00.209796    8967 retry.go:31] will retry after 1.311000543s: connect: dial unix /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/monitor: connect: connection refused
	I0914 23:46:00.210467    8967 kapi.go:59] client config for running-upgrade-386000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.key", CAFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106291800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
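The rest.Config dump above shows how minikube wires client-go to the upgraded cluster: host URL plus mutual-TLS files from the profile directory. As a rough point of reference, a minimal sketch (not minikube's kapi.go; the paths are placeholders standing in for the profile files named in the log) of building an equivalent client:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assemble a client config from certificate files, as the log entry reflects.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/<profile>/client.crt", // client certificate (placeholder path)
			KeyFile:  "/path/to/profiles/<profile>/client.key", // client private key (placeholder path)
			CAFile:   "/path/to/ca.crt",                        // cluster CA bundle (placeholder path)
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("clientset ready:", clientset != nil)
}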
	I0914 23:46:00.210616    8967 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-386000"
	W0914 23:46:00.210622    8967 addons.go:243] addon default-storageclass should already be in state true
	I0914 23:46:00.210629    8967 host.go:66] Checking if "running-upgrade-386000" exists ...
	I0914 23:46:00.211179    8967 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 23:46:00.211185    8967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 23:46:00.211191    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	I0914 23:46:00.213585    8967 out.go:177] * Verifying Kubernetes components...
	I0914 23:46:00.219608    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:46:00.315110    8967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 23:46:00.319842    8967 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:46:00.319892    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:46:00.324000    8967 api_server.go:72] duration metric: took 114.602042ms to wait for apiserver process to appear ...
	I0914 23:46:00.324008    8967 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:46:00.324016    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:00.389537    8967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 23:46:00.695077    8967 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 23:46:00.695090    8967 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 23:46:01.527529    8967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:46:01.531510    8967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:46:01.531517    8967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 23:46:01.531525    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	I0914 23:46:01.572903    8967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
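The two apply steps above (storageclass.yaml, then storage-provisioner.yaml) run the guest's pinned kubectl binary against the in-VM kubeconfig. A minimal sketch of replaying one of those invocations by hand (illustrative only; it assumes the guest paths shown in the log and relies on sudo's support for leading VAR=value arguments):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirror the logged command: sudo KUBECONFIG=... kubectl apply -f <addon manifest>.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("apply failed:", err)
	}
}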
	I0914 23:46:05.325720    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:05.325753    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:10.325919    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:10.325961    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:15.326584    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:15.326603    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:20.326899    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:20.326929    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:25.327332    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:25.327370    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:30.328030    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:30.328068    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0914 23:46:30.696887    8967 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0914 23:46:30.701097    8967 out.go:177] * Enabled addons: storage-provisioner
	I0914 23:46:30.708951    8967 addons.go:510] duration metric: took 30.500117625s for enable addons: enabled=[storage-provisioner]
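The entries surrounding this point are the apiserver health wait: a GET against https://10.0.2.15:8443/healthz with a short client timeout, retried until the 6m0s node wait budget is exhausted, which is why the same "context deadline exceeded" pair repeats at roughly five-second intervals. A minimal sketch of that polling pattern (stdlib only, not minikube's api_server.go; the skip-verify transport is an assumption made for brevity):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
func waitForHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between checks in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; real code verifies the cluster CA
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(3 * time.Second) // brief pause before the next probe
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(6*time.Minute)); err != nil {
		fmt.Println(err)
	}
}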
	I0914 23:46:35.328890    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:35.328930    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:40.330019    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:40.330057    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:45.331422    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:45.331449    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:50.332665    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:50.332694    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:55.332972    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:55.333026    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:00.335245    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:00.335425    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:00.346411    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:00.346498    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:00.357205    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:00.357291    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:00.367801    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:00.367879    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:00.378210    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:00.378282    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:00.389090    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:00.389171    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:00.399558    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:00.399649    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:00.410712    8967 logs.go:276] 0 containers: []
	W0914 23:47:00.410723    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:00.410798    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:00.421668    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:00.421686    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:00.421691    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:00.433545    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:00.433559    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:00.458371    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:00.458379    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:00.494497    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:00.494509    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:00.512129    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:00.512139    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:00.525607    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:00.525618    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:00.537312    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:00.537326    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:00.548731    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:00.548747    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:00.564060    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:00.564071    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:00.575577    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:00.575592    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:00.612742    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:00.612753    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:00.617282    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:00.617288    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:00.635564    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:00.635575    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
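Each diagnostic pass above follows the same shape: for every control-plane component, list candidate containers with a docker name filter, then tail the last 400 lines of each match. A minimal sketch of that gathering loop (a hypothetical standalone helper, not minikube's logs.go; it shells out to a local docker daemon rather than the SSH runner used in this test):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose names match k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		for _, id := range ids {
			// Tail the same 400 lines the test harness collects.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
		}
	}
}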
	I0914 23:47:03.151877    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:08.154009    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:08.154137    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:08.165538    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:08.165625    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:08.177718    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:08.177805    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:08.191193    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:08.191269    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:08.201691    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:08.201772    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:08.212373    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:08.212456    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:08.222577    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:08.222665    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:08.233362    8967 logs.go:276] 0 containers: []
	W0914 23:47:08.233375    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:08.233452    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:08.244125    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:08.244157    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:08.244162    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:08.255439    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:08.255449    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:08.292485    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:08.292494    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:08.315726    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:08.315737    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:08.328187    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:08.328201    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:08.340817    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:08.340828    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:08.352490    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:08.352500    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:08.374121    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:08.374133    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:08.397262    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:08.397270    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:08.401613    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:08.401621    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:08.436320    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:08.436331    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:08.456159    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:08.456170    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:08.467893    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:08.467904    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:10.986514    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:15.995090    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:15.995298    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:16.016956    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:16.017049    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:16.030661    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:16.030750    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:16.043866    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:16.043949    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:16.055617    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:16.055704    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:16.066591    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:16.066687    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:16.077263    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:16.077345    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:16.087295    8967 logs.go:276] 0 containers: []
	W0914 23:47:16.087306    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:16.087375    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:16.097669    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:16.097687    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:16.097692    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:16.119161    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:16.119174    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:16.144081    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:16.144089    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:16.179548    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:16.179558    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:16.184278    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:16.184284    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:16.198708    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:16.198717    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:16.217251    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:16.217268    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:16.233981    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:16.233996    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:16.249968    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:16.249982    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:16.261810    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:16.261825    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:16.297051    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:16.297064    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:16.312173    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:16.312188    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:16.324359    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:16.324371    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:18.839111    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:23.844342    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:23.844521    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:23.860258    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:23.860359    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:23.872398    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:23.872480    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:23.883251    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:23.883333    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:23.894068    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:23.894154    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:23.904558    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:23.904644    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:23.916898    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:23.916981    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:23.927836    8967 logs.go:276] 0 containers: []
	W0914 23:47:23.927848    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:23.927920    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:23.938916    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:23.938931    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:23.938936    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:23.958921    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:23.958931    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:23.970841    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:23.970854    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:23.982692    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:23.982702    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:23.999887    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:23.999899    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:24.024361    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:24.024369    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:24.040874    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:24.040887    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:24.046041    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:24.046050    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:24.084121    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:24.084138    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:24.107882    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:24.107893    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:24.119985    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:24.119998    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:24.135736    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:24.135745    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:24.147758    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:24.147772    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:26.686242    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:31.690781    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:31.690964    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:31.709344    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:31.709454    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:31.724705    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:31.724789    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:31.737146    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:31.737238    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:31.747659    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:31.747740    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:31.761466    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:31.761550    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:31.772337    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:31.772420    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:31.782563    8967 logs.go:276] 0 containers: []
	W0914 23:47:31.782573    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:31.782645    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:31.792897    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:31.792911    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:31.792917    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:31.797971    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:31.797978    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:31.814093    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:31.814109    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:31.828672    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:31.828683    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:31.845697    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:31.845711    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:31.869150    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:31.869158    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:31.880272    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:31.880282    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:31.891583    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:31.891598    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:31.926880    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:31.926891    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:31.962538    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:31.962554    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:31.974903    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:31.974912    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:31.986583    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:31.986597    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:32.002308    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:32.002319    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:34.516888    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:39.520566    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:39.520789    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:39.534751    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:39.534848    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:39.546059    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:39.546142    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:39.556458    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:39.556540    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:39.568069    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:39.568144    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:39.578551    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:39.578628    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:39.592340    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:39.592414    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:39.602699    8967 logs.go:276] 0 containers: []
	W0914 23:47:39.602712    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:39.602777    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:39.617743    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:39.617759    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:39.617765    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:39.622724    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:39.622732    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:39.640447    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:39.640460    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:39.652530    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:39.652540    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:39.663799    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:39.663809    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:39.700286    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:39.700295    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:39.734892    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:39.734902    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:39.749326    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:39.749341    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:39.761070    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:39.761080    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:39.776895    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:39.776906    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:39.788633    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:39.788644    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:39.806337    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:39.806349    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:39.817853    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:39.817864    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:42.344935    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:47.345947    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:47.346093    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:47.357373    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:47.357451    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:47.368282    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:47.368367    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:47.378596    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:47.378684    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:47.389303    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:47.389378    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:47.399802    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:47.399880    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:47.411192    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:47.411277    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:47.421767    8967 logs.go:276] 0 containers: []
	W0914 23:47:47.421778    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:47.421848    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:47.435952    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:47.435967    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:47.435972    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:47.450921    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:47.450934    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:47.463326    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:47.463341    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:47.498468    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:47.498475    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:47.532536    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:47.532547    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:47.544364    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:47.544376    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:47.560332    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:47.560343    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:47.571940    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:47.571951    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:47.593518    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:47.593528    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:47.606421    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:47.606429    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:47.630466    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:47.630475    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:47.634774    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:47.634783    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:47.648962    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:47.648972    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:50.163796    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:55.166467    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:55.166626    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:55.179143    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:55.179233    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:55.197796    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:55.197877    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:55.208732    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:55.208819    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:55.219515    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:55.219608    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:55.231317    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:55.231394    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:55.242592    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:55.242682    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:55.253523    8967 logs.go:276] 0 containers: []
	W0914 23:47:55.253537    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:55.253610    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:55.264733    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:55.264748    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:55.264754    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:55.270024    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:55.270031    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:55.306080    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:55.306091    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:55.320561    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:55.320571    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:55.332696    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:55.332707    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:55.350853    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:55.350864    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:55.362813    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:55.362823    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:55.399271    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:55.399280    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:55.413596    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:55.413607    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:55.425600    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:55.425611    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:55.437308    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:55.437320    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:55.453315    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:55.453325    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:55.477148    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:55.477157    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:57.990159    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:02.992254    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:02.992417    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:03.003151    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:03.003239    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:03.014613    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:03.014692    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:03.025422    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:03.025503    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:03.036039    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:03.036108    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:03.051989    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:03.052077    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:03.062564    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:03.062645    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:03.076692    8967 logs.go:276] 0 containers: []
	W0914 23:48:03.076704    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:03.076771    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:03.087537    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:03.087553    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:03.087559    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:03.099200    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:03.099210    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:03.134522    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:03.134538    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:03.149594    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:03.149607    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:03.162741    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:03.162752    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:03.175206    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:03.175222    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:03.187381    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:03.187397    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:03.206223    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:03.206233    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:03.218237    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:03.218251    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:03.253708    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:03.253717    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:03.258828    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:03.258836    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:03.272792    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:03.272805    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:03.288537    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:03.288548    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:05.816240    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:10.818569    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:10.818765    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:10.833697    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:10.833802    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:10.845446    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:10.845516    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:10.856186    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:10.856256    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:10.866672    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:10.866750    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:10.877656    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:10.877745    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:10.896322    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:10.896406    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:10.907087    8967 logs.go:276] 0 containers: []
	W0914 23:48:10.907098    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:10.907159    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:10.926239    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:10.926254    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:10.926259    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:10.951053    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:10.951061    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:10.988041    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:10.988054    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:10.992619    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:10.992627    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:11.006669    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:11.006679    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:11.018347    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:11.018362    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:11.033743    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:11.033752    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:11.044934    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:11.044943    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:11.062692    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:11.062706    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:11.074104    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:11.074113    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:11.110653    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:11.110663    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:11.125028    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:11.125039    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:11.137003    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:11.137014    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:13.650488    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:18.652806    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:18.653028    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:18.670673    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:18.670764    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:18.684164    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:18.684261    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:18.695388    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:18.695478    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:18.705879    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:18.705953    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:18.716669    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:18.716750    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:18.727442    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:18.727521    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:18.737494    8967 logs.go:276] 0 containers: []
	W0914 23:48:18.737506    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:18.737583    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:18.749823    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:18.749839    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:18.749845    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:18.762290    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:18.762301    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:18.783754    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:18.783763    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:18.795661    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:18.795672    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:18.807277    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:18.807288    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:48:18.819512    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:18.819523    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:18.831268    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:18.831279    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:18.843326    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:18.843339    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:18.848077    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:18.848085    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:18.884903    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:18.884914    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:18.900319    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:18.900330    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:18.912322    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:18.912338    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:18.936120    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:18.936129    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:18.971241    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:18.971252    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:18.986109    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:18.986119    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:21.505481    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:26.507893    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:26.508087    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:26.525423    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:26.525524    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:26.539581    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:26.539674    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:26.550986    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:26.551075    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:26.561685    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:26.561760    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:26.572820    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:26.572898    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:26.583133    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:26.583215    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:26.594095    8967 logs.go:276] 0 containers: []
	W0914 23:48:26.594112    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:26.594173    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:26.608401    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:26.608417    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:26.608423    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:26.626535    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:26.626546    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:26.641425    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:26.641438    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:48:26.653002    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:26.653015    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:26.668316    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:26.668326    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:26.679575    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:26.679585    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:26.695022    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:26.695033    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:26.718591    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:26.718599    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:26.755405    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:26.755413    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:26.773034    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:26.773043    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:26.790652    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:26.790662    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:26.803900    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:26.803913    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:26.808337    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:26.808346    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:26.859587    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:26.859599    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:26.874161    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:26.874176    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:29.389702    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:34.391957    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:34.392062    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:34.407914    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:34.407999    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:34.418601    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:34.418685    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:34.429696    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:34.429781    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:34.440385    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:34.440459    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:34.451719    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:34.451805    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:34.462658    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:34.462742    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:34.473304    8967 logs.go:276] 0 containers: []
	W0914 23:48:34.473315    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:34.473377    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:34.483638    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:34.483655    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:34.483661    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:34.521120    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:34.521129    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:34.546164    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:34.546171    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:34.561230    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:34.561241    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:34.573131    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:34.573141    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:34.577982    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:34.577990    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:34.614022    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:34.614033    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:34.638616    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:34.638628    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:48:34.651210    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:34.651221    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:34.668384    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:34.668393    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:34.682784    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:34.682794    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:34.694094    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:34.694106    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:34.705563    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:34.705573    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:34.723580    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:34.723595    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:34.741593    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:34.741609    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:37.255093    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:42.257365    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:42.257576    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:42.271499    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:42.271599    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:42.283124    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:42.283209    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:42.294279    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:42.294366    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:42.304948    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:42.305028    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:42.316334    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:42.316415    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:42.328846    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:42.328927    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:42.338987    8967 logs.go:276] 0 containers: []
	W0914 23:48:42.338997    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:42.339062    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:42.349624    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:42.349640    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:42.349645    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:42.362157    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:42.362167    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:42.374038    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:42.374048    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:42.391352    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:42.391361    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:42.407306    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:42.407316    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:42.419352    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:42.419361    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:42.454980    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:42.454993    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:42.459615    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:42.459621    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:42.474493    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:42.474504    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:42.486611    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:42.486622    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:42.503956    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:42.503966    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:42.527742    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:42.527749    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:42.562473    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:42.562482    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:42.575314    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:42.575328    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:42.586854    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:42.586864    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:48:45.102659    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:50.104974    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:50.105154    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:50.122616    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:50.122717    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:50.134959    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:50.135037    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:50.146307    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:50.146390    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:50.157346    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:50.157425    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:50.168354    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:50.168436    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:50.179183    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:50.179263    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:50.189717    8967 logs.go:276] 0 containers: []
	W0914 23:48:50.189728    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:50.189797    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:50.200473    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:50.200490    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:50.200496    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:50.221296    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:50.221306    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:50.232563    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:50.232573    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:50.257991    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:50.258010    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:50.271324    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:50.271339    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:50.275694    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:50.275700    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:50.287758    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:50.287770    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:50.299667    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:50.299676    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:50.317182    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:50.317193    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:50.332154    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:50.332164    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:50.345087    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:50.345098    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:50.357235    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:50.357246    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:50.392538    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:50.392546    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:50.427369    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:50.427381    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:50.441559    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:50.441570    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:48:52.953591    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:57.955971    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:57.956195    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:57.983938    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:57.984030    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:57.996213    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:57.996300    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:58.008860    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:58.008965    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:58.019361    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:58.019441    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:58.029933    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:58.030007    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:58.040268    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:58.040359    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:58.051136    8967 logs.go:276] 0 containers: []
	W0914 23:48:58.051148    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:58.051210    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:58.063575    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:58.063592    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:58.063600    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:58.100688    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:58.100698    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:58.124345    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:58.124353    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:58.139579    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:58.139591    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:48:58.150911    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:58.150921    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:58.187419    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:58.187427    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:58.191948    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:58.191957    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:58.205951    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:58.205962    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:58.218424    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:58.218439    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:58.235981    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:58.235991    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:58.247878    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:58.247888    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:58.263727    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:58.263737    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:58.275371    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:58.275380    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:58.289297    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:58.289312    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:58.301414    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:58.301425    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:00.816010    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:05.818340    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:05.818576    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:05.837963    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:05.838079    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:05.852255    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:05.852355    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:05.864907    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:05.864993    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:05.875185    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:05.875260    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:05.885846    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:05.885935    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:05.897219    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:05.897303    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:05.907869    8967 logs.go:276] 0 containers: []
	W0914 23:49:05.907879    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:05.907946    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:05.920577    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:05.920595    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:05.920602    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:05.962553    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:05.962564    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:05.977079    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:05.977091    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:05.992037    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:05.992047    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:05.996652    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:05.996659    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:06.012161    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:06.012174    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:06.027195    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:06.027205    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:06.039213    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:06.039224    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:06.051524    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:06.051538    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:06.071569    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:06.071582    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:06.090889    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:06.090900    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:06.114282    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:06.114290    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:06.148660    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:06.148671    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:06.171029    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:06.171043    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:06.182978    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:06.182988    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:08.700638    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:13.702889    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:13.703108    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:13.723121    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:13.723238    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:13.739692    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:13.739792    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:13.752733    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:13.752818    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:13.763250    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:13.763331    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:13.774609    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:13.774694    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:13.784926    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:13.785010    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:13.795187    8967 logs.go:276] 0 containers: []
	W0914 23:49:13.795199    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:13.795270    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:13.805302    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:13.805318    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:13.805324    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:13.820227    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:13.820241    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:13.832861    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:13.832872    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:13.856697    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:13.856706    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:13.861296    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:13.861301    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:13.877400    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:13.877416    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:13.889856    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:13.889867    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:13.902024    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:13.902038    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:13.920010    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:13.920020    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:13.933509    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:13.933520    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:13.951969    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:13.951984    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:13.989086    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:13.989099    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:14.001721    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:14.001732    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:14.019653    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:14.019664    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:14.036689    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:14.036700    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:16.575488    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:21.577721    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:21.577908    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:21.595366    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:21.595466    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:21.608982    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:21.609072    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:21.620746    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:21.620834    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:21.631449    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:21.631531    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:21.642247    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:21.642328    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:21.653088    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:21.653171    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:21.663108    8967 logs.go:276] 0 containers: []
	W0914 23:49:21.663119    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:21.663194    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:21.673479    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:21.673498    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:21.673504    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:21.690842    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:21.690856    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:21.702700    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:21.702713    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:21.719700    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:21.719714    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:21.737296    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:21.737305    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:21.749106    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:21.749116    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:21.766421    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:21.766434    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:21.789447    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:21.789456    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:21.801570    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:21.801583    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:21.806644    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:21.806652    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:21.819783    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:21.819794    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:21.835396    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:21.835409    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:21.871317    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:21.871325    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:21.882543    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:21.882553    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:21.894584    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:21.894593    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:24.432939    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:29.433676    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:29.433873    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:29.448082    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:29.448184    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:29.465696    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:29.465779    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:29.477173    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:29.477253    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:29.488026    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:29.488117    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:29.498754    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:29.498827    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:29.508777    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:29.508849    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:29.518617    8967 logs.go:276] 0 containers: []
	W0914 23:49:29.518628    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:29.518705    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:29.529724    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:29.529743    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:29.529749    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:29.564630    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:29.564638    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:29.600065    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:29.600075    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:29.614552    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:29.614568    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:29.626332    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:29.626347    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:29.638935    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:29.638947    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:29.651175    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:29.651190    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:29.673879    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:29.673895    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:29.687804    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:29.687821    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:29.703557    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:29.703570    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:29.726607    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:29.726616    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:29.738626    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:29.738635    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:29.750336    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:29.750350    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:29.755012    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:29.755018    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:29.769627    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:29.769641    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:32.283481    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:37.285492    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:37.285650    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:37.296481    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:37.296566    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:37.306922    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:37.307004    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:37.321664    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:37.321750    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:37.332588    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:37.332671    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:37.343708    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:37.343793    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:37.354601    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:37.354687    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:37.365309    8967 logs.go:276] 0 containers: []
	W0914 23:49:37.365326    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:37.365397    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:37.375915    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:37.375938    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:37.375944    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:37.391460    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:37.391470    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:37.409891    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:37.409903    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:37.424206    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:37.424215    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:37.435735    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:37.435750    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:37.449148    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:37.449158    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:37.484416    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:37.484432    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:37.498932    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:37.498942    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:37.503333    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:37.503340    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:37.515324    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:37.515335    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:37.530077    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:37.530086    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:37.542439    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:37.542450    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:37.566706    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:37.566715    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:37.578193    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:37.578208    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:37.614889    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:37.614899    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:40.130527    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:45.132720    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:45.132955    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:45.161586    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:45.161694    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:45.174722    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:45.174809    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:45.186323    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:45.186410    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:45.196735    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:45.196823    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:45.207317    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:45.207391    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:45.217575    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:45.217663    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:45.227934    8967 logs.go:276] 0 containers: []
	W0914 23:49:45.227944    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:45.228010    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:45.240721    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:45.240738    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:45.240744    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:45.245864    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:45.245873    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:45.258286    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:45.258297    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:45.270333    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:45.270344    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:45.282597    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:45.282609    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:45.304735    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:45.304746    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:45.350507    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:45.350522    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:45.364978    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:45.364991    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:45.377683    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:45.377695    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:45.393874    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:45.393887    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:45.406270    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:45.406281    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:45.442269    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:45.442280    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:45.456719    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:45.456732    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:45.469189    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:45.469200    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:45.481538    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:45.481553    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:48.007150    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:53.009435    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:53.009559    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:53.021185    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:53.021268    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:53.031534    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:53.031618    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:53.041925    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:53.042000    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:53.052694    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:53.052766    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:53.062925    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:53.063015    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:53.073635    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:53.073718    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:53.084064    8967 logs.go:276] 0 containers: []
	W0914 23:49:53.084077    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:53.084148    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:53.100132    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:53.100150    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:53.100158    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:53.123665    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:53.123675    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:53.137433    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:53.137444    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:53.149112    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:53.149124    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:53.161824    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:53.161837    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:53.175268    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:53.175279    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:53.195811    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:53.195821    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:53.231669    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:53.231680    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:53.246129    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:53.246140    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:53.257707    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:53.257721    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:53.269514    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:53.269529    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:53.281872    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:53.281883    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:53.317385    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:53.317395    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:53.322471    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:53.322478    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:53.334250    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:53.334262    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:55.857515    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:50:00.859708    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:50:00.864376    8967 out.go:201] 
	W0914 23:50:00.868189    8967 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0914 23:50:00.868199    8967 out.go:270] * 
	W0914 23:50:00.868930    8967 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:50:00.877330    8967 out.go:201] 

** /stderr **
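
The stderr capture above ends with minikube giving up after its 6m0s wait for a healthy API server. For context, the repeated "Checking apiserver healthz" lines correspond to a poll-until-deadline loop of roughly the following shape. This is an illustrative sketch only, not minikube's actual api_server.go code: the URL and the outer timeout are taken from the log, while InsecureSkipVerify stands in for minikube's real CA handling.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or ctx expires.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe timeout, as in the log's Client.Timeout errors
		Transport: &http.Transport{
			// Illustration only: the real apiserver cert is signed by minikube's CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		if resp, err := client.Do(req); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
		case <-time.After(2 * time.Second): // back off before the next probe
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}

In the run above, every individual probe times out, so the outer deadline eventually expires and produces the GUEST_START error recorded in the stderr block.
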
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-386000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-14 23:50:00.975516 -0700 PDT m=+1276.153959584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-386000 -n running-upgrade-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-386000 -n running-upgrade-386000: exit status 2 (15.699506958s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
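
The "(dbg) Run" / "Non-zero exit" bookkeeping above comes from the test harness shelling out to the minikube binary and recording the exit code and duration. A minimal sketch of that pattern using only the Go standard library follows; the command line is copied from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "running-upgrade-386000",
		"-n", "running-upgrade-386000")
	out, err := cmd.CombinedOutput() // capture output the way the harness does
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode() // non-zero exit, e.g. the "exit status 2" above
	} else if err != nil {
		fmt.Println("run error:", err) // e.g. binary not found
		return
	}
	fmt.Printf("exit status %d (%s)\n-- stdout --\n%s", code, time.Since(start), out)
}

Here the harness treats the non-zero status exit as potentially benign ("may be ok") because stdout still reports the host as Running, even though the command exited with status 2.
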
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-386000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-262000 sudo                                | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo                                | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo cat                            | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo cat                            | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo                                | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo                                | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo                                | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo cat                            | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo cat                            | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo                                | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo                                | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo                                | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo find                           | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-262000 sudo crio                           | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-262000                                     | cilium-262000             | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT | 14 Sep 24 23:39 PDT |
	| start   | -p kubernetes-upgrade-838000                         | kubernetes-upgrade-838000 | jenkins | v1.34.0 | 14 Sep 24 23:39 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-506000                             | offline-docker-506000     | jenkins | v1.34.0 | 14 Sep 24 23:40 PDT | 14 Sep 24 23:40 PDT |
	| start   | -p stopped-upgrade-438000                            | minikube                  | jenkins | v1.26.0 | 14 Sep 24 23:40 PDT | 14 Sep 24 23:40 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-838000                         | kubernetes-upgrade-838000 | jenkins | v1.34.0 | 14 Sep 24 23:40 PDT | 14 Sep 24 23:40 PDT |
	| start   | -p kubernetes-upgrade-838000                         | kubernetes-upgrade-838000 | jenkins | v1.34.0 | 14 Sep 24 23:40 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-838000                         | kubernetes-upgrade-838000 | jenkins | v1.34.0 | 14 Sep 24 23:40 PDT | 14 Sep 24 23:40 PDT |
	| start   | -p running-upgrade-386000                            | minikube                  | jenkins | v1.26.0 | 14 Sep 24 23:40 PDT | 14 Sep 24 23:41 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-438000 stop                          | minikube                  | jenkins | v1.26.0 | 14 Sep 24 23:40 PDT | 14 Sep 24 23:41 PDT |
	| start   | -p stopped-upgrade-438000                            | stopped-upgrade-438000    | jenkins | v1.34.0 | 14 Sep 24 23:41 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-386000                            | running-upgrade-386000    | jenkins | v1.34.0 | 14 Sep 24 23:41 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 23:41:17
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 23:41:17.966512    8967 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:41:17.966701    8967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:41:17.966704    8967 out.go:358] Setting ErrFile to fd 2...
	I0914 23:41:17.966707    8967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:41:17.966825    8967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:41:17.967759    8967 out.go:352] Setting JSON to false
	I0914 23:41:17.984317    8967 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6046,"bootTime":1726376431,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:41:17.984442    8967 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:41:17.988565    8967 out.go:177] * [running-upgrade-386000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:41:17.996576    8967 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:41:17.996598    8967 notify.go:220] Checking for updates...
	I0914 23:41:18.004477    8967 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:41:18.008577    8967 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:41:18.009967    8967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:41:18.012533    8967 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:41:18.015525    8967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:41:18.018890    8967 config.go:182] Loaded profile config "running-upgrade-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 23:41:18.021523    8967 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 23:41:18.024532    8967 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:41:18.027564    8967 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:41:18.034563    8967 start.go:297] selected driver: qemu2
	I0914 23:41:18.034569    8967 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 23:41:18.034622    8967 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:41:18.036883    8967 cni.go:84] Creating CNI manager for ""
	I0914 23:41:18.036922    8967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:41:18.036946    8967 start.go:340] cluster config:
	{Name:running-upgrade-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 23:41:18.036991    8967 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:41:18.045522    8967 out.go:177] * Starting "running-upgrade-386000" primary control-plane node in "running-upgrade-386000" cluster
	I0914 23:41:18.049533    8967 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 23:41:18.049549    8967 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0914 23:41:18.049558    8967 cache.go:56] Caching tarball of preloaded images
	I0914 23:41:18.049614    8967 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:41:18.049621    8967 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0914 23:41:18.049682    8967 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/config.json ...
	I0914 23:41:18.050062    8967 start.go:360] acquireMachinesLock for running-upgrade-386000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:41:30.317947    8967 start.go:364] duration metric: took 12.268107834s to acquireMachinesLock for "running-upgrade-386000"
	I0914 23:41:30.317964    8967 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:41:30.317975    8967 fix.go:54] fixHost starting: 
	I0914 23:41:30.318717    8967 fix.go:112] recreateIfNeeded on running-upgrade-386000: state=Running err=<nil>
	W0914 23:41:30.318728    8967 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:41:30.323476    8967 out.go:177] * Updating the running qemu2 "running-upgrade-386000" VM ...
	I0914 23:41:29.328190    8956 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/config.json ...
	I0914 23:41:29.328760    8956 machine.go:93] provisionDockerMachine start ...
	I0914 23:41:29.328927    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.329368    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.329381    8956 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 23:41:29.418253    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 23:41:29.418283    8956 buildroot.go:166] provisioning hostname "stopped-upgrade-438000"
	I0914 23:41:29.418448    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.418718    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.418733    8956 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-438000 && echo "stopped-upgrade-438000" | sudo tee /etc/hostname
	I0914 23:41:29.501331    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-438000
	
	I0914 23:41:29.501422    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.501608    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.501620    8956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-438000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-438000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-438000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:41:29.571316    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:41:29.571327    8956 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19644-6577/.minikube CaCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19644-6577/.minikube}
	I0914 23:41:29.571339    8956 buildroot.go:174] setting up certificates
	I0914 23:41:29.571345    8956 provision.go:84] configureAuth start
	I0914 23:41:29.571350    8956 provision.go:143] copyHostCerts
	I0914 23:41:29.571412    8956 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem, removing ...
	I0914 23:41:29.571431    8956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem
	I0914 23:41:29.571541    8956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem (1082 bytes)
	I0914 23:41:29.571723    8956 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem, removing ...
	I0914 23:41:29.571726    8956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem
	I0914 23:41:29.571769    8956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem (1123 bytes)
	I0914 23:41:29.571887    8956 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem, removing ...
	I0914 23:41:29.571890    8956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem
	I0914 23:41:29.571932    8956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem (1679 bytes)
	I0914 23:41:29.572029    8956 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-438000 san=[127.0.0.1 localhost minikube stopped-upgrade-438000]
	I0914 23:41:29.641290    8956 provision.go:177] copyRemoteCerts
	I0914 23:41:29.641340    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:41:29.641349    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
	I0914 23:41:29.675478    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 23:41:29.682190    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 23:41:29.689043    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 23:41:29.696364    8956 provision.go:87] duration metric: took 125.011917ms to configureAuth
	I0914 23:41:29.696374    8956 buildroot.go:189] setting minikube options for container-runtime
	I0914 23:41:29.696479    8956 config.go:182] Loaded profile config "stopped-upgrade-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 23:41:29.696526    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.696609    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.696614    8956 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 23:41:29.761412    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 23:41:29.761425    8956 buildroot.go:70] root file system type: tmpfs
	I0914 23:41:29.761480    8956 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 23:41:29.761529    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.761627    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.761662    8956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 23:41:29.830745    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 23:41:29.830810    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.830922    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.830930    8956 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 23:41:30.204597    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 23:41:30.204612    8956 machine.go:96] duration metric: took 875.861209ms to provisionDockerMachine
	I0914 23:41:30.204619    8956 start.go:293] postStartSetup for "stopped-upgrade-438000" (driver="qemu2")
	I0914 23:41:30.204625    8956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:41:30.204698    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:41:30.204709    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
	I0914 23:41:30.239334    8956 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:41:30.240768    8956 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 23:41:30.240778    8956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19644-6577/.minikube/addons for local assets ...
	I0914 23:41:30.240867    8956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19644-6577/.minikube/files for local assets ...
	I0914 23:41:30.240965    8956 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem -> 70932.pem in /etc/ssl/certs
	I0914 23:41:30.241087    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:41:30.244362    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem --> /etc/ssl/certs/70932.pem (1708 bytes)
	I0914 23:41:30.252029    8956 start.go:296] duration metric: took 47.403916ms for postStartSetup
	I0914 23:41:30.252049    8956 fix.go:56] duration metric: took 21.406067334s for fixHost
	I0914 23:41:30.252102    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.252222    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:30.252229    8956 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 23:41:30.317823    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726382490.633243629
	
	I0914 23:41:30.317833    8956 fix.go:216] guest clock: 1726382490.633243629
	I0914 23:41:30.317838    8956 fix.go:229] Guest: 2024-09-14 23:41:30.633243629 -0700 PDT Remote: 2024-09-14 23:41:30.252051 -0700 PDT m=+21.518313959 (delta=381.192629ms)
	I0914 23:41:30.317850    8956 fix.go:200] guest clock delta is within tolerance: 381.192629ms
	I0914 23:41:30.317852    8956 start.go:83] releasing machines lock for "stopped-upgrade-438000", held for 21.471879333s
	I0914 23:41:30.317940    8956 ssh_runner.go:195] Run: cat /version.json
	I0914 23:41:30.317949    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
	I0914 23:41:30.317955    8956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:41:30.317971    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
	W0914 23:41:30.318613    8956 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51446->127.0.0.1:51229: write: broken pipe
	I0914 23:41:30.318633    8956 retry.go:31] will retry after 306.629026ms: ssh: handshake failed: write tcp 127.0.0.1:51446->127.0.0.1:51229: write: broken pipe
	W0914 23:41:30.352896    8956 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 23:41:30.352971    8956 ssh_runner.go:195] Run: systemctl --version
	I0914 23:41:30.355294    8956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 23:41:30.357291    8956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 23:41:30.357347    8956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0914 23:41:30.360675    8956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0914 23:41:30.365647    8956 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 23:41:30.365669    8956 start.go:495] detecting cgroup driver to use...
	I0914 23:41:30.365852    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:41:30.373158    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0914 23:41:30.376300    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 23:41:30.379433    8956 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 23:41:30.379468    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 23:41:30.382442    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 23:41:30.385480    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 23:41:30.388717    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 23:41:30.392167    8956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 23:41:30.395969    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 23:41:30.399163    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 23:41:30.401828    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 23:41:30.405012    8956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 23:41:30.408383    8956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 23:41:30.411473    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:30.476234    8956 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 23:41:30.482760    8956 start.go:495] detecting cgroup driver to use...
	I0914 23:41:30.482833    8956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 23:41:30.488971    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:41:30.496244    8956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:41:30.502733    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:41:30.507677    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 23:41:30.512225    8956 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 23:41:30.552018    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 23:41:30.557668    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:41:30.563573    8956 ssh_runner.go:195] Run: which cri-dockerd
	I0914 23:41:30.564865    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 23:41:30.569150    8956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 23:41:30.575674    8956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 23:41:30.653528    8956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 23:41:30.724099    8956 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 23:41:30.724157    8956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0914 23:41:30.731689    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:30.804416    8956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 23:41:31.930539    8956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.12611625s)
	I0914 23:41:31.930653    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0914 23:41:31.935939    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 23:41:31.941156    8956 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 23:41:32.015581    8956 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 23:41:32.083271    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:32.143314    8956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 23:41:32.148734    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 23:41:32.153524    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:32.222710    8956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0914 23:41:32.260411    8956 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 23:41:32.260513    8956 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 23:41:32.263204    8956 start.go:563] Will wait 60s for crictl version
	I0914 23:41:32.263263    8956 ssh_runner.go:195] Run: which crictl
	I0914 23:41:32.264686    8956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 23:41:32.279410    8956 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0914 23:41:32.279491    8956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 23:41:32.295307    8956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 23:41:30.330513    8967 machine.go:93] provisionDockerMachine start ...
	I0914 23:41:30.330624    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.330789    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.330793    8967 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 23:41:30.403800    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-386000
	
	I0914 23:41:30.403817    8967 buildroot.go:166] provisioning hostname "running-upgrade-386000"
	I0914 23:41:30.403858    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.403982    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.403989    8967 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-386000 && echo "running-upgrade-386000" | sudo tee /etc/hostname
	I0914 23:41:30.481936    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-386000
	
	I0914 23:41:30.482001    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.482133    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.482142    8967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-386000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-386000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-386000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:41:30.556964    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:41:30.556979    8967 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19644-6577/.minikube CaCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19644-6577/.minikube}
	I0914 23:41:30.556988    8967 buildroot.go:174] setting up certificates
	I0914 23:41:30.556998    8967 provision.go:84] configureAuth start
	I0914 23:41:30.557005    8967 provision.go:143] copyHostCerts
	I0914 23:41:30.557075    8967 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem, removing ...
	I0914 23:41:30.557084    8967 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem
	I0914 23:41:30.557194    8967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem (1123 bytes)
	I0914 23:41:30.557362    8967 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem, removing ...
	I0914 23:41:30.557367    8967 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem
	I0914 23:41:30.557413    8967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem (1679 bytes)
	I0914 23:41:30.557516    8967 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem, removing ...
	I0914 23:41:30.557520    8967 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem
	I0914 23:41:30.557564    8967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem (1082 bytes)
	I0914 23:41:30.557653    8967 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-386000 san=[127.0.0.1 localhost minikube running-upgrade-386000]
	I0914 23:41:30.599354    8967 provision.go:177] copyRemoteCerts
	I0914 23:41:30.599402    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:41:30.599411    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	I0914 23:41:30.650108    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 23:41:30.666407    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 23:41:30.672910    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 23:41:30.686009    8967 provision.go:87] duration metric: took 128.994209ms to configureAuth
	I0914 23:41:30.686024    8967 buildroot.go:189] setting minikube options for container-runtime
	I0914 23:41:30.686158    8967 config.go:182] Loaded profile config "running-upgrade-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 23:41:30.686201    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.686298    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.686305    8967 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 23:41:30.794128    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 23:41:30.794149    8967 buildroot.go:70] root file system type: tmpfs
	I0914 23:41:30.794218    8967 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 23:41:30.794289    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.794425    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.794464    8967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 23:41:30.910822    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 23:41:30.910889    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.911007    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:30.911015    8967 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 23:41:30.994446    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:41:30.994463    8967 machine.go:96] duration metric: took 663.950041ms to provisionDockerMachine
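
The unit update above follows an idempotent write-compare-swap pattern: render docker.service.new, diff it against the installed unit, and only move it into place, daemon-reload, and restart Docker when the two differ. A rough Go sketch of that pattern, as a local (non-SSH) simplification — the paths and systemctl commands mirror the log, but this is not minikube's exact sequence:

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func updateUnit(path string, next []byte) error {
		cur, err := os.ReadFile(path)
		if err == nil && bytes.Equal(cur, next) {
			return nil // unchanged: skip daemon-reload and the restart entirely
		}
		if err := os.WriteFile(path+".new", next, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		_ = updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n"))
	}
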
	I0914 23:41:30.994469    8967 start.go:293] postStartSetup for "running-upgrade-386000" (driver="qemu2")
	I0914 23:41:30.994477    8967 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:41:30.994562    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:41:30.994574    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	I0914 23:41:31.040345    8967 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:41:31.041718    8967 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 23:41:31.041725    8967 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19644-6577/.minikube/addons for local assets ...
	I0914 23:41:31.041805    8967 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19644-6577/.minikube/files for local assets ...
	I0914 23:41:31.041893    8967 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem -> 70932.pem in /etc/ssl/certs
	I0914 23:41:31.041989    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:41:31.044549    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem --> /etc/ssl/certs/70932.pem (1708 bytes)
	I0914 23:41:31.052269    8967 start.go:296] duration metric: took 57.793708ms for postStartSetup
	I0914 23:41:31.052288    8967 fix.go:56] duration metric: took 734.334583ms for fixHost
	I0914 23:41:31.052345    8967 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:31.052467    8967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104cb9190] 0x104cbb9d0 <nil>  [] 0s} localhost 51266 <nil> <nil>}
	I0914 23:41:31.052473    8967 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 23:41:31.133363    8967 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726382491.145148723
	
	I0914 23:41:31.133375    8967 fix.go:216] guest clock: 1726382491.145148723
	I0914 23:41:31.133379    8967 fix.go:229] Guest: 2024-09-14 23:41:31.145148723 -0700 PDT Remote: 2024-09-14 23:41:31.05229 -0700 PDT m=+13.107698585 (delta=92.858723ms)
	I0914 23:41:31.133396    8967 fix.go:200] guest clock delta is within tolerance: 92.858723ms
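
The fixHost step reads the guest clock with `date +%s.%N` and compares it to the host clock; here the ~93ms delta is within tolerance, so no resync is needed. A small Go sketch of parsing that output and checking a skew threshold — the one-second tolerance below is a made-up placeholder, not minikube's actual cutoff:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch turns `date +%s.%N` output ("1726382491.145148723") into a
	// time.Time; %N is zero-padded to nine digits, so the fractional part is
	// already nanoseconds.
	func parseEpoch(s string) (time.Time, error) {
		secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
		sec, err := strconv.ParseInt(secStr, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec, err := strconv.ParseInt(nsecStr, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseEpoch("1726382491.145148723") // value from the log
		delta := time.Now().Sub(guest)
		// Hypothetical one-second tolerance; minikube logs the delta and only
		// intervenes when the guest clock has drifted too far.
		fmt.Printf("delta=%v within=%v\n", delta, delta.Abs() < time.Second)
	}
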
	I0914 23:41:31.133399    8967 start.go:83] releasing machines lock for "running-upgrade-386000", held for 815.459083ms
	I0914 23:41:31.133476    8967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:41:31.133478    8967 ssh_runner.go:195] Run: cat /version.json
	I0914 23:41:31.133497    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	I0914 23:41:31.133502    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	W0914 23:41:31.134132    8967 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:51488->127.0.0.1:51266: read: connection reset by peer
	I0914 23:41:31.134153    8967 retry.go:31] will retry after 151.454862ms: ssh: handshake failed: read tcp 127.0.0.1:51488->127.0.0.1:51266: read: connection reset by peer
	W0914 23:41:31.326944    8967 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 23:41:31.327024    8967 ssh_runner.go:195] Run: systemctl --version
	I0914 23:41:31.328866    8967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 23:41:31.330424    8967 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 23:41:31.330454    8967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0914 23:41:31.333123    8967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0914 23:41:31.337377    8967 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 23:41:31.337386    8967 start.go:495] detecting cgroup driver to use...
	I0914 23:41:31.337461    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:41:31.342864    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0914 23:41:31.346925    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 23:41:31.350339    8967 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 23:41:31.350376    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 23:41:31.353553    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 23:41:31.356572    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 23:41:31.359555    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 23:41:31.362980    8967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 23:41:31.366291    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 23:41:31.369397    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 23:41:31.372200    8967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 23:41:31.375428    8967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 23:41:31.378830    8967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 23:41:31.381715    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:31.472533    8967 ssh_runner.go:195] Run: sudo systemctl restart containerd
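
Among the runtime-setup commands above, the `echo 1 > /proc/sys/net/ipv4/ip_forward` line enables IPv4 forwarding so the node can route pod-network traffic; it is a one-line procfs write. A Go equivalent of just that toggle (requires root, like the logged sudo command):

	package main

	import "os"

	func main() {
		// Equivalent of the logged `echo 1 > /proc/sys/net/ipv4/ip_forward`:
		// enable IPv4 forwarding so pod traffic can be routed off the node.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			panic(err)
		}
	}
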
	I0914 23:41:31.480325    8967 start.go:495] detecting cgroup driver to use...
	I0914 23:41:31.480394    8967 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 23:41:31.488632    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:41:31.497531    8967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:41:31.505944    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:41:31.510954    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 23:41:31.515740    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:41:31.522415    8967 ssh_runner.go:195] Run: which cri-dockerd
	I0914 23:41:31.523954    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 23:41:31.526531    8967 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 23:41:31.531618    8967 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 23:41:31.633049    8967 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 23:41:31.740831    8967 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 23:41:31.740885    8967 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0914 23:41:31.746042    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:31.851097    8967 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 23:41:32.313290    8956 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0914 23:41:32.313372    8956 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0914 23:41:32.314641    8956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 23:41:32.318098    8956 kubeadm.go:883] updating cluster {Name:stopped-upgrade-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51261 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0914 23:41:32.318147    8956 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 23:41:32.318198    8956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 23:41:32.328366    8956 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 23:41:32.328375    8956 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 23:41:32.328433    8956 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 23:41:32.332173    8956 ssh_runner.go:195] Run: which lz4
	I0914 23:41:32.333658    8956 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 23:41:32.334984    8956 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 23:41:32.334995    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0914 23:41:33.260560    8956 docker.go:649] duration metric: took 926.967042ms to copy over tarball
	I0914 23:41:33.260628    8956 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 23:41:34.416460    8956 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.155840166s)
	I0914 23:41:34.416473    8956 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 23:41:34.432310    8956 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 23:41:34.436073    8956 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0914 23:41:34.441451    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:34.501550    8956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 23:41:35.661199    8956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.159655s)
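
The preload path above runs in three steps: stat the tarball on the guest, scp it over when missing, then unpack it into /var with lz4-compressed tar so Docker's image store is seeded before the daemon restarts. A minimal Go sketch of the unpack step, shelling out the same way the log does (the tarball path is the one shown being copied above):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Same unpack the log performs on the guest: an lz4-compressed tar
		// extracted into /var, preserving security.capability xattrs so the
		// seeded images keep their file capabilities.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}
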
	I0914 23:41:35.661307    8956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 23:41:35.673180    8956 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 23:41:35.673188    8956 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 23:41:35.673193    8956 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 23:41:35.677715    8956 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:35.680088    8956 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:35.682796    8956 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0914 23:41:35.683232    8956 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:35.685417    8956 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:35.685627    8956 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:35.687730    8956 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:35.687953    8956 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 23:41:35.689553    8956 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:35.689636    8956 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:35.690891    8956 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:35.691008    8956 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:35.692471    8956 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:35.694538    8956 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:35.694629    8956 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:35.696235    8956 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:36.070596    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0914 23:41:36.082071    8956 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0914 23:41:36.082095    8956 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0914 23:41:36.082176    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0914 23:41:36.086507    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:36.094324    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0914 23:41:36.094462    8956 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0914 23:41:36.104058    8956 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0914 23:41:36.104079    8956 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:36.104115    8956 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0914 23:41:36.104133    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0914 23:41:36.104146    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:36.106650    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:36.109606    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:36.123510    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0914 23:41:36.123698    8956 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0914 23:41:36.125438    8956 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0914 23:41:36.125448    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0914 23:41:36.143841    8956 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0914 23:41:36.143864    8956 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:36.143887    8956 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0914 23:41:36.143900    8956 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:36.143941    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:36.143941    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:36.143992    8956 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0914 23:41:36.144043    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0914 23:41:36.144656    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:36.148665    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:36.178212    8956 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0914 23:41:36.182334    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0914 23:41:36.182520    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0914 23:41:36.183552    8956 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0914 23:41:36.183688    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:36.222828    8956 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0914 23:41:36.222850    8956 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0914 23:41:36.222861    8956 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:36.222860    8956 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:36.222948    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:36.222950    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:36.241649    8956 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0914 23:41:36.241674    8956 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:36.241740    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:36.267759    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0914 23:41:36.268335    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0914 23:41:36.307324    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 23:41:36.307470    8956 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0914 23:41:36.320529    8956 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0914 23:41:36.320563    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0914 23:41:36.411600    8956 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0914 23:41:36.411626    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0914 23:41:36.474611    8956 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 23:41:36.474749    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:36.518376    8956 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0914 23:41:36.518468    8956 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0914 23:41:36.518495    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0914 23:41:36.525660    8956 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0914 23:41:36.525685    8956 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:36.525758    8956 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:36.681446    8956 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0914 23:41:36.681483    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 23:41:36.681627    8956 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 23:41:36.683067    8956 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0914 23:41:36.683081    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0914 23:41:36.713715    8956 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 23:41:36.713728    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0914 23:41:36.971146    8956 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 23:41:36.971190    8956 cache_images.go:92] duration metric: took 1.298005625s to LoadCachedImages
	W0914 23:41:36.971243    8956 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
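
LoadCachedImages above runs one cycle per image: `docker image inspect --format {{.Id}}` to compare against the expected hash, `docker rmi` when the stored copy is wrong (here, amd64 copies on an arm64 guest), then an scp of the cached tarball and a piped `docker load`. A condensed Go sketch of that cycle — the image ID and paths are copied from the pause:3.7 lines above, and the sha256: prefix handling is an assumption about the inspect output format:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	// ensureImage condenses the per-image cycle from the log: inspect for the
	// expected ID, and when it is absent or wrong, remove the stale copy and
	// load the cached tarball.
	func ensureImage(ref, wantID, cachedTar string) error {
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Id}}", ref).Output()
		if err == nil && strings.TrimSpace(string(out)) == wantID {
			return nil // already present at the right hash
		}
		_ = exec.Command("docker", "rmi", ref).Run() // ignore "no such image"
		return exec.Command("/bin/bash", "-c",
			"sudo cat "+cachedTar+" | docker load").Run()
	}

	func main() {
		err := ensureImage("registry.k8s.io/pause:3.7",
			"sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
			"/var/lib/minikube/images/pause_3.7")
		if err != nil {
			log.Fatal(err)
		}
	}
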
	I0914 23:41:36.971255    8956 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0914 23:41:36.971318    8956 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-438000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 23:41:36.971399    8956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 23:41:36.987760    8956 cni.go:84] Creating CNI manager for ""
	I0914 23:41:36.987772    8956 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:41:36.987777    8956 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 23:41:36.987788    8956 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-438000 NodeName:stopped-upgrade-438000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 23:41:36.987854    8956 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-438000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 23:41:36.987926    8956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0914 23:41:36.991532    8956 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 23:41:36.991581    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 23:41:36.994565    8956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0914 23:41:36.999649    8956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 23:41:37.004976    8956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0914 23:41:37.011461    8956 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0914 23:41:37.013147    8956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
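
Both host entries (host.minikube.internal earlier, control-plane.minikube.internal here) are installed with the same trick: filter out any existing line ending in the name, append a fresh `IP<TAB>name` line, and copy the result back over /etc/hosts. A Go sketch of that idempotent rewrite — it writes the file directly instead of going through /tmp/h.$$ and sudo cp, purely for brevity:

	package main

	import (
		"os"
		"strings"
	)

	// setHostsEntry drops any stale line for the name and re-adds it with the
	// wanted IP, mirroring the grep -v / echo / cp one-liner in the log.
	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		// The entry installed by the log line above.
		_ = setHostsEntry("/etc/hosts", "10.0.2.15", "control-plane.minikube.internal")
	}
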
	I0914 23:41:37.017058    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:37.084242    8956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 23:41:37.089946    8956 certs.go:68] Setting up /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000 for IP: 10.0.2.15
	I0914 23:41:37.089986    8956 certs.go:194] generating shared ca certs ...
	I0914 23:41:37.089998    8956 certs.go:226] acquiring lock for ca certs: {Name:mkfb6b8e69b171081d1b5cff0d9e65dd76b6a9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:37.090276    8956 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.key
	I0914 23:41:37.090335    8956 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.key
	I0914 23:41:37.090344    8956 certs.go:256] generating profile certs ...
	I0914 23:41:37.090425    8956 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/client.key
	I0914 23:41:37.090439    8956 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key.85ef2f10
	I0914 23:41:37.090449    8956 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt.85ef2f10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0914 23:41:37.172424    8956 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt.85ef2f10 ...
	I0914 23:41:37.172441    8956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt.85ef2f10: {Name:mk21423c72c1ff74f64f5cd6e1e5865c0f9ee4cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:37.172722    8956 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key.85ef2f10 ...
	I0914 23:41:37.172728    8956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key.85ef2f10: {Name:mkb8833ac504d17eecb93561bd81ae06f7603029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:37.172861    8956 certs.go:381] copying /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt.85ef2f10 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt
	I0914 23:41:37.173002    8956 certs.go:385] copying /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key.85ef2f10 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key
	I0914 23:41:37.173167    8956 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/proxy-client.key
	I0914 23:41:37.173300    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093.pem (1338 bytes)
	W0914 23:41:37.173332    8956 certs.go:480] ignoring /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093_empty.pem, impossibly tiny 0 bytes
	I0914 23:41:37.173338    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 23:41:37.173364    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem (1082 bytes)
	I0914 23:41:37.173389    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem (1123 bytes)
	I0914 23:41:37.173415    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem (1679 bytes)
	I0914 23:41:37.173466    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem (1708 bytes)
	I0914 23:41:37.173922    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 23:41:37.180912    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 23:41:37.188275    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 23:41:37.196293    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 23:41:37.204512    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 23:41:37.212646    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 23:41:37.219636    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 23:41:37.226567    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 23:41:37.233530    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem --> /usr/share/ca-certificates/70932.pem (1708 bytes)
	I0914 23:41:37.240618    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 23:41:37.247958    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093.pem --> /usr/share/ca-certificates/7093.pem (1338 bytes)
	I0914 23:41:37.255770    8956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 23:41:37.261654    8956 ssh_runner.go:195] Run: openssl version
	I0914 23:41:37.264062    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 23:41:37.267462    8956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:37.268985    8956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:40 /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:37.269011    8956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:37.270828    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 23:41:37.274017    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7093.pem && ln -fs /usr/share/ca-certificates/7093.pem /etc/ssl/certs/7093.pem"
	I0914 23:41:37.276986    8956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7093.pem
	I0914 23:41:37.278247    8956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:29 /usr/share/ca-certificates/7093.pem
	I0914 23:41:37.278272    8956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7093.pem
	I0914 23:41:37.279815    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7093.pem /etc/ssl/certs/51391683.0"
	I0914 23:41:37.282780    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70932.pem && ln -fs /usr/share/ca-certificates/70932.pem /etc/ssl/certs/70932.pem"
	I0914 23:41:37.285744    8956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70932.pem
	I0914 23:41:37.287213    8956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:29 /usr/share/ca-certificates/70932.pem
	I0914 23:41:37.287239    8956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70932.pem
	I0914 23:41:37.289094    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70932.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 23:41:37.292560    8956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 23:41:37.294276    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 23:41:37.296316    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 23:41:37.298303    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 23:41:37.300789    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 23:41:37.302819    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 23:41:37.304477    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
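
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate stays valid for at least another 24 hours; a zero exit status means it does. An equivalent check in Go with crypto/x509 — the path in main is just the first cert the log checks, as an illustration:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// validFor mirrors `openssl x509 -checkend`: true when the certificate
	// remains valid for at least the given window.
	func validFor(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block in " + path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}
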
	I0914 23:41:37.306391    8956 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51261 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 23:41:37.306462    8956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 23:41:37.316684    8956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 23:41:37.319876    8956 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 23:41:37.319883    8956 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 23:41:37.319910    8956 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 23:41:37.322881    8956 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:37.322922    8956 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-438000" does not appear in /Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:41:37.322937    8956 kubeconfig.go:62] /Users/jenkins/minikube-integration/19644-6577/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-438000" cluster setting kubeconfig missing "stopped-upgrade-438000" context setting]
	I0914 23:41:37.323096    8956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/kubeconfig: {Name:mke334fd43bb51604954449e74caf7f81dee5b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:37.323745    8956 kapi.go:59] client config for stopped-upgrade-438000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/client.key", CAFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104949800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 23:41:37.324761    8956 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 23:41:37.327723    8956 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-438000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0914 23:41:37.327740    8956 kubeadm.go:1160] stopping kube-system containers ...
	I0914 23:41:37.327791    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 23:41:37.338833    8956 docker.go:483] Stopping containers: [9405ac203f41 6f2907013b5d 87eeb9536e45 ba34e94c3930 9edbecfd3df2 1faf6553ac06 72775498364e d019fc00a42a]
	I0914 23:41:37.338911    8956 ssh_runner.go:195] Run: docker stop 9405ac203f41 6f2907013b5d 87eeb9536e45 ba34e94c3930 9edbecfd3df2 1faf6553ac06 72775498364e d019fc00a42a
	I0914 23:41:37.349220    8956 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 23:41:37.355142    8956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:41:37.357831    8956 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 23:41:37.357837    8956 kubeadm.go:157] found existing configuration files:
	
	I0914 23:41:37.357864    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/admin.conf
	I0914 23:41:37.360518    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 23:41:37.360543    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 23:41:37.363583    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/kubelet.conf
	I0914 23:41:37.366167    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 23:41:37.366204    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 23:41:37.368737    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/controller-manager.conf
	I0914 23:41:37.371766    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 23:41:37.371791    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 23:41:37.374643    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/scheduler.conf
	I0914 23:41:37.377008    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 23:41:37.377036    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 23:41:37.379720    8956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:41:37.382321    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:37.403484    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:37.934575    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:38.061946    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:38.085583    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:38.111815    8956 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:41:38.111906    8956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:38.614109    8956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:39.392611    8967 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.541639625s)
	I0914 23:41:39.392675    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0914 23:41:39.397824    8967 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0914 23:41:39.405632    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 23:41:39.410729    8967 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 23:41:39.496505    8967 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 23:41:39.584311    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:39.664118    8967 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 23:41:39.670362    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 23:41:39.675094    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:39.772089    8967 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0914 23:41:39.812725    8967 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 23:41:39.812816    8967 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 23:41:39.815441    8967 start.go:563] Will wait 60s for crictl version
	I0914 23:41:39.815498    8967 ssh_runner.go:195] Run: which crictl
	I0914 23:41:39.817441    8967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 23:41:39.829576    8967 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0914 23:41:39.829654    8967 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 23:41:39.842366    8967 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 23:41:39.861646    8967 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0914 23:41:39.861731    8967 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0914 23:41:39.863096    8967 kubeadm.go:883] updating cluster {Name:running-upgrade-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0914 23:41:39.863142    8967 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 23:41:39.863191    8967 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 23:41:39.874438    8967 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 23:41:39.874447    8967 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 23:41:39.874508    8967 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 23:41:39.877857    8967 ssh_runner.go:195] Run: which lz4
	I0914 23:41:39.879376    8967 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 23:41:39.880846    8967 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 23:41:39.880861    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0914 23:41:40.865514    8967 docker.go:649] duration metric: took 986.197084ms to copy over tarball
	I0914 23:41:40.865586    8967 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 23:41:42.282135    8967 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.41655625s)
	I0914 23:41:42.282148    8967 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 23:41:42.297815    8967 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 23:41:42.300696    8967 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0914 23:41:42.305894    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:42.394564    8967 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 23:41:39.112710    8956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:39.116686    8956 api_server.go:72] duration metric: took 1.00489275s to wait for apiserver process to appear ...
	I0914 23:41:39.116696    8956 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:41:39.116706    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:43.626409    8967 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.231853208s)
	I0914 23:41:43.626532    8967 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 23:41:43.646960    8967 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 23:41:43.646969    8967 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 23:41:43.646975    8967 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 23:41:43.650702    8967 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:43.652277    8967 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:43.654427    8967 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0914 23:41:43.654519    8967 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:43.656442    8967 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:43.656590    8967 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:43.658091    8967 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 23:41:43.658248    8967 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:43.659602    8967 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:43.659695    8967 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:43.660605    8967 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:43.660709    8967 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:43.661604    8967 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:43.661655    8967 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:43.662310    8967 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:43.663000    8967 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:44.004530    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0914 23:41:44.018013    8967 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0914 23:41:44.018040    8967 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0914 23:41:44.018102    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0914 23:41:44.028764    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:44.029826    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0914 23:41:44.029921    8967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0914 23:41:44.040587    8967 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0914 23:41:44.040610    8967 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:44.040614    8967 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0914 23:41:44.040641    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0914 23:41:44.040670    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:44.042737    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:44.053230    8967 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0914 23:41:44.053243    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0914 23:41:44.058442    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0914 23:41:44.063654    8967 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0914 23:41:44.063810    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:44.067965    8967 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0914 23:41:44.067987    8967 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:44.068061    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:44.095233    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:44.095591    8967 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0914 23:41:44.095609    8967 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:44.095645    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:44.095828    8967 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0914 23:41:44.106420    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:44.110024    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0914 23:41:44.110088    8967 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0914 23:41:44.110107    8967 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:44.110126    8967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0914 23:41:44.110138    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:44.110713    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 23:41:44.110779    8967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0914 23:41:44.122424    8967 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0914 23:41:44.122447    8967 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:44.122511    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:44.125095    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0914 23:41:44.125124    8967 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0914 23:41:44.125140    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0914 23:41:44.125173    8967 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0914 23:41:44.125183    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0914 23:41:44.146979    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0914 23:41:44.150875    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:44.190675    8967 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0914 23:41:44.190700    8967 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:44.190776    8967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:44.221036    8967 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0914 23:41:44.221061    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0914 23:41:44.233924    8967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0914 23:41:44.316400    8967 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0914 23:41:44.436334    8967 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0914 23:41:44.436349    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0914 23:41:44.573033    8967 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 23:41:44.573152    8967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:44.578210    8967 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0914 23:41:44.584162    8967 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0914 23:41:44.584182    8967 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:44.584256    8967 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:44.594931    8967 cache_images.go:92] duration metric: took 947.966ms to LoadCachedImages
	W0914 23:41:44.594974    8967 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
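Each cached image above runs through the same cycle: `docker image inspect` compares the stored ID against the expected hash, a mismatch triggers `docker rmi` plus an scp of the cached tarball from the host, and the tarball is streamed into `docker load` inside the guest. A rough sketch of the load step (path and helper name are illustrative, not minikube's cache_images API):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImageTarball streams a cached image tarball into the Docker
// daemon, the same shape as the `sudo cat <tar> | docker load`
// commands in the log above.
func loadImageTarball(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // pipe the tarball straight into `docker load`
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := loadImageTarball("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println(err)
	}
}
```

The `X Unable to load cached images` warning above is the failure mode of this cycle: the kube-proxy tarball was never downloaded into the cache directory, so the existence check on the host side fails before any transfer starts.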
	I0914 23:41:44.594987    8967 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0914 23:41:44.595037    8967 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-386000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 23:41:44.595115    8967 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 23:41:44.616700    8967 cni.go:84] Creating CNI manager for ""
	I0914 23:41:44.616711    8967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:41:44.616716    8967 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 23:41:44.616725    8967 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-386000 NodeName:running-upgrade-386000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 23:41:44.616794    8967 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-386000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
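One detail worth flagging in the KubeletConfiguration above: `cgroupDriver: cgroupfs` has to agree with what the Docker daemon itself reports, which is why the log runs `docker info --format {{.CgroupDriver}}` before rendering this config; a mismatch leaves kubelet unable to manage pod cgroups. A small sketch of that probe (hypothetical helper name, not minikube's code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerCgroupDriver asks the Docker daemon which cgroup driver it
// uses ("cgroupfs" or "systemd"), matching the `docker info` probe
// in the log; kubelet's cgroupDriver setting must match its output.
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := dockerCgroupDriver()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	fmt.Println("docker cgroup driver:", driver)
}
```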
	
	I0914 23:41:44.616861    8967 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0914 23:41:44.620556    8967 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 23:41:44.620594    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 23:41:44.623378    8967 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0914 23:41:44.629197    8967 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 23:41:44.634898    8967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0914 23:41:44.640785    8967 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0914 23:41:44.642456    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:44.728724    8967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 23:41:44.733967    8967 certs.go:68] Setting up /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000 for IP: 10.0.2.15
	I0914 23:41:44.733974    8967 certs.go:194] generating shared ca certs ...
	I0914 23:41:44.733982    8967 certs.go:226] acquiring lock for ca certs: {Name:mkfb6b8e69b171081d1b5cff0d9e65dd76b6a9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:44.734127    8967 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.key
	I0914 23:41:44.734160    8967 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.key
	I0914 23:41:44.734169    8967 certs.go:256] generating profile certs ...
	I0914 23:41:44.734242    8967 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.key
	I0914 23:41:44.734264    8967 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key.89f5340c
	I0914 23:41:44.734275    8967 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt.89f5340c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0914 23:41:44.868615    8967 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt.89f5340c ...
	I0914 23:41:44.868625    8967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt.89f5340c: {Name:mkd0124a77422e53adfb1ec4736c793193ce0844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:44.868927    8967 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key.89f5340c ...
	I0914 23:41:44.868934    8967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key.89f5340c: {Name:mk182419fee4e7490848bfa85ed65c73f6d45bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:44.869072    8967 certs.go:381] copying /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt.89f5340c -> /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt
	I0914 23:41:44.870074    8967 certs.go:385] copying /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key.89f5340c -> /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key
	I0914 23:41:44.870263    8967 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/proxy-client.key
	I0914 23:41:44.870398    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093.pem (1338 bytes)
	W0914 23:41:44.870425    8967 certs.go:480] ignoring /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093_empty.pem, impossibly tiny 0 bytes
	I0914 23:41:44.870430    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 23:41:44.870450    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem (1082 bytes)
	I0914 23:41:44.870472    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem (1123 bytes)
	I0914 23:41:44.870491    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem (1679 bytes)
	I0914 23:41:44.870532    8967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem (1708 bytes)
	I0914 23:41:44.870869    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 23:41:44.878606    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 23:41:44.886349    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 23:41:44.893160    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 23:41:44.900054    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 23:41:44.906742    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 23:41:44.913994    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 23:41:44.921828    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 23:41:44.929078    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093.pem --> /usr/share/ca-certificates/7093.pem (1338 bytes)
	I0914 23:41:44.935928    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem --> /usr/share/ca-certificates/70932.pem (1708 bytes)
	I0914 23:41:44.942523    8967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 23:41:44.949797    8967 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
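The apiserver certificate generated above embeds the service IP, loopback, and node addresses ([10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]) as IP SANs so clients can reach the endpoint under any of them. A rough sketch of issuing such a CA-signed cert with Go's crypto/x509 (a standalone illustration under assumed names, not minikube's crypto.go):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA standing in for minikubeCA (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte DER cert with %d IP SANs\n", len(srvDER), len(srvTmpl.IPAddresses))
}
```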
	I0914 23:41:44.955028    8967 ssh_runner.go:195] Run: openssl version
	I0914 23:41:44.956954    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7093.pem && ln -fs /usr/share/ca-certificates/7093.pem /etc/ssl/certs/7093.pem"
	I0914 23:41:44.960062    8967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7093.pem
	I0914 23:41:44.961471    8967 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:29 /usr/share/ca-certificates/7093.pem
	I0914 23:41:44.961506    8967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7093.pem
	I0914 23:41:44.963323    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7093.pem /etc/ssl/certs/51391683.0"
	I0914 23:41:44.966496    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70932.pem && ln -fs /usr/share/ca-certificates/70932.pem /etc/ssl/certs/70932.pem"
	I0914 23:41:44.970063    8967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70932.pem
	I0914 23:41:44.971601    8967 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:29 /usr/share/ca-certificates/70932.pem
	I0914 23:41:44.971627    8967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70932.pem
	I0914 23:41:44.973403    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70932.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 23:41:44.976276    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 23:41:44.979220    8967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:44.980830    8967 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:40 /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:44.980854    8967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:44.982799    8967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 23:41:44.985917    8967 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 23:41:44.987499    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 23:41:44.989338    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 23:41:44.991308    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 23:41:44.993459    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 23:41:44.996205    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 23:41:44.997917    8967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 23:41:44.999943    8967 kubeadm.go:392] StartCluster: {Name:running-upgrade-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 23:41:45.000033    8967 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 23:41:45.010536    8967 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 23:41:45.014670    8967 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 23:41:45.014679    8967 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 23:41:45.014708    8967 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 23:41:45.018173    8967 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:45.018460    8967 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-386000" does not appear in /Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:41:45.018551    8967 kubeconfig.go:62] /Users/jenkins/minikube-integration/19644-6577/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-386000" cluster setting kubeconfig missing "running-upgrade-386000" context setting]
	I0914 23:41:45.018723    8967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/kubeconfig: {Name:mke334fd43bb51604954449e74caf7f81dee5b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:45.019156    8967 kapi.go:59] client config for running-upgrade-386000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.key", CAFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106291800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 23:41:45.019508    8967 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 23:41:45.022388    8967 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-386000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0914 23:41:45.022398    8967 kubeadm.go:1160] stopping kube-system containers ...
	I0914 23:41:45.022448    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 23:41:45.033849    8967 docker.go:483] Stopping containers: [6287f0754e12 46673287e658 ecb52bb838c2 8b51eca867bc ff64f5b5c01a d971e5f7858d 810d336c4764 cdbc600fbbb4 0469132abd7c e723f75a5293 066727c3a39e 05233d01ab13 53d75f5be566 8e2b4c6925a4 b999b318bbc3 ed8d75c41830 e456f04c65a9 fa5f013636cd 1d8885933eb9 910604735e4b 2fa63c68886d]
	I0914 23:41:45.033925    8967 ssh_runner.go:195] Run: docker stop 6287f0754e12 46673287e658 ecb52bb838c2 8b51eca867bc ff64f5b5c01a d971e5f7858d 810d336c4764 cdbc600fbbb4 0469132abd7c e723f75a5293 066727c3a39e 05233d01ab13 53d75f5be566 8e2b4c6925a4 b999b318bbc3 ed8d75c41830 e456f04c65a9 fa5f013636cd 1d8885933eb9 910604735e4b 2fa63c68886d
	I0914 23:41:45.045687    8967 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 23:41:45.154228    8967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:41:45.159994    8967 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 15 06:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 15 06:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 15 06:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 15 06:41 /etc/kubernetes/scheduler.conf
	
	I0914 23:41:45.160044    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/admin.conf
	I0914 23:41:45.163532    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:45.163561    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 23:41:45.167031    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/kubelet.conf
	I0914 23:41:45.170507    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:45.170541    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 23:41:45.174064    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/controller-manager.conf
	I0914 23:41:45.177373    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:45.177404    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 23:41:45.180610    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/scheduler.conf
	I0914 23:41:45.183721    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:45.183751    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 23:41:45.186406    8967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:41:45.189335    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:45.223431    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:45.707615    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:45.943459    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:45.970294    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:45.993106    8967 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:41:45.993193    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:46.495563    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:46.995554    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:47.495573    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:44.118310    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:44.118342    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:47.995517    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:48.493930    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:48.995251    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:49.495266    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:49.499615    8967 api_server.go:72] duration metric: took 3.506577167s to wait for apiserver process to appear ...
	I0914 23:41:49.499623    8967 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:41:49.499633    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:49.118645    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:49.118677    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:54.501620    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:54.501643    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:54.118847    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:54.118904    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:59.501767    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:59.501806    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:59.119314    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:59.119371    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:04.502017    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:04.502051    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:04.120246    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:04.120335    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:09.502439    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:09.502552    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:09.121305    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:09.121358    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:14.503575    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:14.503669    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:14.122959    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:14.123009    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:19.504977    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:19.504998    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:19.124593    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:19.124674    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:24.506060    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:24.506085    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:24.126882    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:24.126905    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:29.507468    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:29.507507    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:29.129000    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:29.129043    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:34.509353    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:34.509374    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:34.131199    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:34.131226    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:39.511444    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:39.511460    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:39.133091    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:39.133560    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:42:39.167558    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:42:39.167709    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:42:39.187553    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:42:39.187663    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:42:39.202428    8956 logs.go:276] 0 containers: []
	W0914 23:42:39.202439    8956 logs.go:278] No container was found matching "coredns"
	I0914 23:42:39.202506    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:42:39.214852    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:42:39.214939    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:42:39.225452    8956 logs.go:276] 0 containers: []
	W0914 23:42:39.225465    8956 logs.go:278] No container was found matching "kube-proxy"
	I0914 23:42:39.225532    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:42:39.236733    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:42:39.236807    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:42:39.246778    8956 logs.go:276] 0 containers: []
	W0914 23:42:39.246790    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:42:39.246859    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:42:39.256610    8956 logs.go:276] 0 containers: []
	W0914 23:42:39.256625    8956 logs.go:278] No container was found matching "storage-provisioner"
	I0914 23:42:39.256629    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:42:39.256635    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:42:39.268036    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:42:39.268047    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:42:39.378479    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:42:39.378490    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:42:39.392893    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:42:39.392903    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:42:39.407568    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:42:39.407578    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:42:39.423702    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:42:39.423713    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:42:39.446737    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:42:39.446749    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:42:39.465107    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:42:39.465116    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:42:39.487880    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:42:39.487890    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:42:39.514955    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:42:39.514964    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:42:39.519256    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:42:39.519263    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:42:39.536340    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:42:39.536350    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:42:39.553396    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:42:39.553405    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
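Each time the healthz probe gives up, the run falls back to a diagnostics pass: it discovers per-component container IDs with a docker ps name filter, then tails each container's logs. A rough, hand-runnable equivalent of that pass, with the component list and 400-line tail taken from the log lines above and error handling simplified (this is a sketch, not minikube's code path):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The same control-plane components the gathering pass filters on.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("docker ps failed for %s: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines
            for _, id := range ids {
                // Same tail length the report uses for every component.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
            }
        }
    }
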
	I0914 23:42:42.078537    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:44.513568    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:44.513587    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:47.081132    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:47.081416    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:42:47.111559    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:42:47.111693    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:42:47.129602    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:42:47.129706    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:42:47.142712    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:42:47.142807    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:42:47.154237    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:42:47.154318    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:42:47.164583    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:42:47.164669    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:42:47.177313    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:42:47.177394    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:42:47.187255    8956 logs.go:276] 0 containers: []
	W0914 23:42:47.187269    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:42:47.187337    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:42:47.202106    8956 logs.go:276] 1 containers: [bbe9ac8055ea]
	I0914 23:42:47.202125    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:42:47.202130    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:42:47.215886    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:42:47.215896    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:42:47.230745    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:42:47.230756    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:42:47.247792    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:42:47.247803    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:42:47.274647    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:42:47.274654    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:42:47.288955    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:42:47.288965    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:42:47.302203    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:42:47.302213    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:42:47.314051    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:42:47.314061    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:42:47.318758    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:42:47.318764    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:42:47.331150    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:42:47.331160    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:42:47.354088    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:42:47.354099    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:42:47.371709    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:42:47.371718    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:42:47.410244    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:42:47.410265    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:42:47.422220    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:42:47.422236    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:42:47.440105    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:42:47.440120    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:42:47.451581    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:42:47.451591    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:42:49.515793    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:49.516208    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:42:49.550896    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:42:49.551056    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:42:49.569456    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:42:49.569558    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:42:49.584450    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:42:49.584546    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:42:49.596485    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:42:49.596573    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:42:49.607230    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:42:49.607304    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:42:49.617705    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:42:49.617788    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:42:49.628373    8967 logs.go:276] 0 containers: []
	W0914 23:42:49.628384    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:42:49.628453    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:42:49.641885    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:42:49.641903    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:42:49.641909    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:42:49.646926    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:42:49.646934    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:42:49.658337    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:42:49.658353    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:42:49.670317    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:42:49.670328    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:42:49.697354    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:42:49.697365    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:42:49.800811    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:42:49.800822    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:42:49.814691    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:42:49.814701    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:42:49.829324    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:42:49.829335    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:42:49.844631    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:42:49.844644    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:42:49.856328    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:42:49.856337    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:42:49.883852    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:42:49.883864    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:42:49.906748    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:42:49.906758    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:42:49.918576    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:42:49.918588    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:42:49.931216    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:42:49.931231    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:42:49.950864    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:42:49.950876    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:42:49.995739    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:42:49.995747    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:42:50.013062    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:42:50.013074    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:42:50.029154    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:42:50.029164    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:42:50.042105    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:42:50.042116    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:42:52.556025    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:49.979273    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:57.558239    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:57.558766    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:42:57.600006    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:42:57.600133    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:42:57.616197    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:42:57.616298    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:42:57.629653    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:42:57.629776    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:42:57.642174    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:42:57.642248    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:42:57.653491    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:42:57.653568    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:42:57.664448    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:42:57.664531    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:42:57.674937    8967 logs.go:276] 0 containers: []
	W0914 23:42:57.674950    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:42:57.675025    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:42:57.685612    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:42:57.685628    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:42:57.685633    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:42:57.690519    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:42:57.690527    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:42:57.715030    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:42:57.715040    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:42:57.729258    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:42:57.729269    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:42:57.740858    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:42:57.740870    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:42:57.755573    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:42:57.755583    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:42:57.771069    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:42:57.771079    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:42:57.808091    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:42:57.808101    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:42:57.822519    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:42:57.822530    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:42:57.834264    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:42:57.834275    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:42:57.845917    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:42:57.845928    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:42:57.860797    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:42:57.860807    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:42:57.878136    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:42:57.878147    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:42:57.889840    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:42:57.889852    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:42:57.932227    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:42:57.932235    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:42:57.948135    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:42:57.948145    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:42:54.981427    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:54.981688    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:42:55.003796    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:42:55.003926    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:42:55.018873    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:42:55.018964    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:42:55.035618    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:42:55.035707    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:42:55.050739    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:42:55.050822    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:42:55.063894    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:42:55.063975    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:42:55.074532    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:42:55.074616    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:42:55.085114    8956 logs.go:276] 0 containers: []
	W0914 23:42:55.085124    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:42:55.085195    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:42:55.095806    8956 logs.go:276] 1 containers: [bbe9ac8055ea]
	I0914 23:42:55.095828    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:42:55.095834    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:42:55.110384    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:42:55.110394    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:42:55.128582    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:42:55.128592    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:42:55.139987    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:42:55.140000    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:42:55.168163    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:42:55.168170    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:42:55.181402    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:42:55.181412    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:42:55.192637    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:42:55.192647    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:42:55.210339    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:42:55.210350    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:42:55.248194    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:42:55.248204    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:42:55.262086    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:42:55.262096    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:42:55.277591    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:42:55.277602    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:42:55.302463    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:42:55.302470    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:42:55.314357    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:42:55.314368    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:42:55.332662    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:42:55.332671    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:42:55.355280    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:42:55.355296    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:42:55.359525    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:42:55.359533    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:42:57.882053    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:57.966379    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:42:57.966390    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:42:57.978394    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:42:57.978405    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:42:58.003648    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:42:58.003666    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
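The "container status" command above is a runtime-agnostic fallback: the backtick substitution resolves crictl's path when it is installed; when it is not, the bare word "crictl" fails to execute and the || falls through to plain sudo docker ps -a. A small sketch of invoking that exact command from Go, as the ssh_runner lines do over SSH (run locally here for illustration):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when present; otherwise fall through to docker ps -a,
        // exactly as in the report's gathered command.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }
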
	I0914 23:43:00.518126    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:02.884173    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:02.884427    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:02.904666    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:02.904782    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:02.919731    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:02.919825    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:02.931314    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:02.931388    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:02.942255    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:02.942368    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:02.952732    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:02.952816    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:02.968153    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:02.968247    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:02.980254    8956 logs.go:276] 0 containers: []
	W0914 23:43:02.980265    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:02.980336    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:02.991045    8956 logs.go:276] 1 containers: [bbe9ac8055ea]
	I0914 23:43:02.991061    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:02.991067    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:03.017822    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:03.017830    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:03.021747    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:03.021755    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:03.045025    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:03.045035    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:03.060849    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:03.060859    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:03.086776    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:03.086783    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:03.100105    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:03.100115    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:03.116090    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:03.116104    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:03.127756    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:03.127766    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:03.175214    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:03.175225    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:03.194013    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:03.194025    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:03.213623    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:03.213632    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:03.228227    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:03.228238    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:03.241246    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:03.241257    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:03.252792    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:03.252802    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:03.263671    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:03.263681    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:05.520346    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:05.520610    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:05.543568    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:05.543714    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:05.558302    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:05.558403    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:05.570820    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:05.570905    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:05.581395    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:05.581473    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:05.592538    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:05.592615    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:05.603120    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:05.603204    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:05.613317    8967 logs.go:276] 0 containers: []
	W0914 23:43:05.613331    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:05.613403    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:05.625380    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:05.625396    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:05.625402    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:05.639583    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:05.639596    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:05.654378    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:05.654390    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:05.665972    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:05.665983    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:05.687745    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:05.687755    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:05.731196    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:05.731204    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:05.745450    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:05.745459    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:05.764989    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:05.764998    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:05.790043    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:05.790050    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:05.802557    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:05.802572    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:05.820351    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:05.820365    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:05.835273    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:05.835284    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:05.863162    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:05.863169    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:05.899784    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:05.899795    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:05.911119    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:05.911130    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:05.922886    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:05.922896    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:05.935850    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:05.935862    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:05.940582    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:05.940589    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:05.954522    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:05.954534    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:05.776707    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:08.468489    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:10.777258    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:10.777425    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:10.790496    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:10.790585    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:10.801889    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:10.801974    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:10.812149    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:10.812235    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:10.822613    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:10.822703    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:10.833638    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:10.833719    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:10.844135    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:10.844224    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:10.854263    8956 logs.go:276] 0 containers: []
	W0914 23:43:10.854274    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:10.854340    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:10.864825    8956 logs.go:276] 1 containers: [bbe9ac8055ea]
	I0914 23:43:10.864843    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:10.864849    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:10.882882    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:10.882896    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:10.909457    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:10.909465    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:10.928349    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:10.928363    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:10.932557    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:10.932563    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:10.968026    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:10.968037    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:10.980572    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:10.980583    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:10.991588    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:10.991599    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:11.007515    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:11.007527    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:11.018647    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:11.018659    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:11.036515    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:11.036527    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:11.064380    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:11.064387    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:11.078176    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:11.078190    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:11.092567    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:11.092582    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:11.119506    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:11.119523    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:11.131819    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:11.131831    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:13.647737    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:13.470213    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:13.470386    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:13.483433    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:13.483524    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:13.494811    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:13.494899    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:13.505640    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:13.505731    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:13.516492    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:13.516568    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:13.527420    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:13.527512    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:13.538464    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:13.538547    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:13.549256    8967 logs.go:276] 0 containers: []
	W0914 23:43:13.549268    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:13.549337    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:13.559896    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:13.559912    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:13.559918    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:13.584853    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:13.584862    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:13.596639    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:13.596651    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:13.608189    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:13.608202    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:13.624786    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:13.624797    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:13.636226    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:13.636237    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:13.650108    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:13.650116    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:13.667447    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:13.667457    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:13.710595    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:13.710606    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:13.725386    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:13.725395    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:13.737239    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:13.737250    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:13.749647    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:13.749662    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:13.754680    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:13.754687    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:13.789651    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:13.789664    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:13.804351    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:13.804363    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:13.815536    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:13.815548    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:13.834724    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:13.834733    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:13.852695    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:13.852705    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:13.879687    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:13.879695    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:16.394398    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:18.649831    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:18.650033    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:18.668930    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:18.669037    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:18.685728    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:18.685817    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:18.697748    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:18.697831    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:18.708553    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:18.708637    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:18.719478    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:18.719552    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:18.730468    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:18.730563    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:18.742128    8956 logs.go:276] 0 containers: []
	W0914 23:43:18.742141    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:18.742216    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:18.753279    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:18.753296    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:18.753301    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:21.396590    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:21.396722    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:21.407997    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:21.408086    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:21.419230    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:21.419327    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:21.430163    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:21.430250    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:21.441193    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:21.441266    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:21.451640    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:21.451725    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:21.462512    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:21.462592    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:21.472991    8967 logs.go:276] 0 containers: []
	W0914 23:43:21.473009    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:21.473076    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:21.483636    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:21.483655    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:21.483661    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:21.488346    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:21.488353    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:21.502022    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:21.502034    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:21.516500    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:21.516510    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:21.543160    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:21.543168    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:21.554591    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:21.554602    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:21.566242    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:21.566252    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:21.584914    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:21.584925    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:21.597270    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:21.597282    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:21.641138    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:21.641147    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:21.652490    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:21.652502    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:21.664562    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:21.664572    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:21.676150    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:21.676160    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:21.717028    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:21.717039    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:21.742226    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:21.742237    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:21.758466    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:21.758478    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:21.775455    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:21.775469    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:21.786631    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:21.786643    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:21.804187    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:21.804197    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:18.784769    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:18.784779    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:18.789198    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:18.789204    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:18.805271    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:18.805282    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:18.820230    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:18.820248    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:18.848819    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:18.848832    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:18.871170    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:18.871180    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:18.883266    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:18.883281    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:18.895079    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:18.895092    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:18.909028    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:18.909038    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:18.927275    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:18.927284    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:18.939114    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:18.939125    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:18.954670    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:18.954680    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:18.998858    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:18.998868    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:19.014073    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:19.014083    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:19.027027    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:19.027040    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:19.039297    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:19.039306    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
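
The two interleaved processes above (PIDs 8956 and 8967) are stuck in the same retry cycle: a healthz probe against https://10.0.2.15:8443 times out with "context deadline exceeded", and minikube falls back to collecting logs from every control-plane container before probing again. Below is a minimal bash rendering of that poll-then-diagnose loop, assuming curl is available in the guest — a sketch only; the real implementation is Go code in minikube's api_server.go and logs.go, and the retry interval here is illustrative. The component list and docker commands are taken from the Run: lines above.

	#!/usr/bin/env bash
	# Sketch of the poll-then-diagnose cycle visible in the log above.
	HEALTHZ=https://10.0.2.15:8443/healthz
	
	while true; do
	  # -k because the in-VM apiserver certificate is self-signed; --max-time
	  # approximates the client timeout behind the "context deadline exceeded"
	  # errors in the log.
	  if curl -fsk --max-time 5 "$HEALTHZ" >/dev/null; then
	    echo "apiserver healthy"
	    break
	  fi
	  # Probe failed: discover each component's containers and tail their
	  # logs, as the ssh_runner invocations above do.
	  for comp in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet storage-provisioner; do
	    for id in $(docker ps -a --filter=name="k8s_${comp}" --format='{{.ID}}'); do
	      docker logs --tail 400 "$id"
	    done
	  done
	  sleep 3  # illustrative back-off; not minikube's actual timing
	done
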
	I0914 23:43:21.566600    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:24.324365    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:26.567995    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:26.568206    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:26.584525    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:26.584629    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:26.596787    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:26.596874    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:26.611680    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:26.611759    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:26.622026    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:26.622105    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:26.632617    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:26.632709    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:26.643101    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:26.643170    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:26.653742    8956 logs.go:276] 0 containers: []
	W0914 23:43:26.653757    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:26.653829    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:26.667915    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:26.667933    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:26.667939    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:26.696231    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:26.696242    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:26.700638    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:26.700644    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:26.736934    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:26.736946    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:26.772785    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:26.772802    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:26.784970    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:26.784982    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:26.809647    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:26.809659    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:26.823260    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:26.823271    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:26.837359    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:26.837373    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:26.848563    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:26.848575    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:26.864094    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:26.864102    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:26.881423    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:26.881433    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:26.892830    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:26.892840    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:26.904775    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:26.904784    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:26.917804    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:26.917815    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:26.932559    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:26.932569    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:26.944075    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:26.944084    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:29.326823    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:29.327284    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:29.360497    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:29.360660    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:29.380178    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:29.380303    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:29.398628    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:29.398715    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:29.410275    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:29.410361    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:29.421707    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:29.421788    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:29.432488    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:29.432576    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:29.443382    8967 logs.go:276] 0 containers: []
	W0914 23:43:29.443397    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:29.443475    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:29.457003    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:29.457022    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:29.457028    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:29.469744    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:29.469757    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:29.483029    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:29.483041    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:29.497231    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:29.497241    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:29.523117    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:29.523129    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:29.534792    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:29.534801    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:29.552721    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:29.552732    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:29.570078    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:29.570088    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:29.582075    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:29.582087    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:29.609382    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:29.609393    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:29.653342    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:29.653350    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:29.658215    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:29.658223    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:29.693910    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:29.693921    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:29.705529    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:29.705541    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:29.718730    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:29.718742    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:29.730880    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:29.730891    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:29.748641    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:29.748651    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:29.760284    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:29.760294    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:29.774099    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:29.774111    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:32.297751    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:29.462997    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:37.300061    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:37.300390    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:37.335075    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:37.335216    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:37.356811    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:37.356901    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:37.369667    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:37.369732    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:37.388118    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:37.388205    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:37.399629    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:37.399711    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:37.410620    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:37.410701    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:37.421085    8967 logs.go:276] 0 containers: []
	W0914 23:43:37.421097    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:37.421169    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:37.431406    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:37.431422    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:37.431428    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:37.457091    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:37.457101    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:37.471254    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:37.471264    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:37.482221    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:37.482233    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:37.494397    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:37.494407    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:37.499314    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:37.499321    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:37.513331    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:37.513341    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:37.533826    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:37.533836    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:37.545188    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:37.545203    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:37.558704    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:37.558718    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:37.569885    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:37.569896    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:37.591853    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:37.591868    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:37.634060    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:37.634070    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:37.645542    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:37.645555    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:37.658660    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:37.658672    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:37.679939    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:37.679949    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:37.697156    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:37.697166    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:37.724096    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:37.724126    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:37.761027    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:37.761041    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:34.465134    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:34.465392    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:34.488886    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:34.488997    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:34.505912    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:34.506007    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:34.519383    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:34.519467    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:34.531036    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:34.531121    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:34.541522    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:34.541596    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:34.551915    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:34.551984    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:34.562002    8956 logs.go:276] 0 containers: []
	W0914 23:43:34.562021    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:34.562115    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:34.572503    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:34.572522    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:34.572528    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:34.585998    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:34.586009    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:34.597892    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:34.597903    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:34.609675    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:34.609686    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:34.614448    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:34.614457    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:34.626785    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:34.626795    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:34.638460    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:34.638470    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:34.652615    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:34.652625    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:34.679656    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:34.679670    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:34.695405    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:34.695415    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:34.715661    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:34.715673    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:34.744996    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:34.745009    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:34.760428    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:34.760441    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:34.774966    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:34.774978    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:34.786743    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:34.786754    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:34.822525    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:34.822538    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:34.840615    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:34.840627    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:37.368167    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:40.277572    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:42.368973    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:42.369164    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:42.384273    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:42.384376    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:42.395998    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:42.396078    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:42.407208    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:42.407293    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:42.419184    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:42.419271    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:42.429126    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:42.429201    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:42.439811    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:42.439882    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:42.449997    8956 logs.go:276] 0 containers: []
	W0914 23:43:42.450008    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:42.450068    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:42.460235    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:42.460257    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:42.460264    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:42.474532    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:42.474544    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:42.486519    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:42.486529    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:42.510730    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:42.510742    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:42.525569    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:42.525581    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:42.551501    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:42.551508    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:42.578395    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:42.578401    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:42.612230    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:42.612244    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:42.626073    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:42.626082    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:42.637203    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:42.637214    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:42.659355    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:42.659366    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:42.670817    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:42.670828    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:42.689341    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:42.689350    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:42.707803    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:42.707820    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:42.720469    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:42.720483    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:42.725043    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:42.725049    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:42.738123    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:42.738133    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:45.280094    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:45.280241    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:45.297865    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:45.297965    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:45.311298    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:45.311381    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:45.322830    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:45.322919    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:45.333676    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:45.333758    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:45.344741    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:45.344825    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:45.355618    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:45.355708    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:45.366049    8967 logs.go:276] 0 containers: []
	W0914 23:43:45.366061    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:45.366132    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:45.376845    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:45.376860    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:45.376866    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:45.389466    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:45.389482    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:45.414589    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:45.414597    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:45.426599    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:45.426611    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:45.440999    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:45.441013    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:45.459666    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:45.459681    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:45.472491    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:45.472503    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:45.490435    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:45.490446    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:45.501894    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:45.501906    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:45.513240    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:45.513251    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:45.524591    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:45.524602    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:45.560711    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:45.560721    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:45.572892    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:45.572901    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:45.584513    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:45.584524    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:45.600143    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:45.600156    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:45.617295    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:45.617309    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:45.636474    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:45.636485    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:45.677051    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:45.677060    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:45.681624    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:45.681631    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:45.253843    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:48.215214    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:50.256088    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:50.256292    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:50.269724    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:50.269817    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:50.281335    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:50.281416    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:50.291737    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:50.291820    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:50.302508    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:50.302598    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:50.314425    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:50.314507    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:50.324912    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:50.324992    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:50.334925    8956 logs.go:276] 0 containers: []
	W0914 23:43:50.334936    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:50.335009    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:50.349103    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:50.349122    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:50.349127    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:50.363206    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:50.363218    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:50.380342    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:50.380352    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:50.384929    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:50.384938    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:50.398830    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:50.398842    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:50.410078    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:50.410090    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:50.429531    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:50.429541    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:50.446795    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:50.446804    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:50.458760    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:50.458773    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:50.484525    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:50.484533    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:50.513126    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:50.513134    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:50.553822    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:50.553835    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:50.567412    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:50.567422    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:50.586825    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:50.586834    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:50.598727    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:50.598743    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:50.610619    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:50.610629    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:50.633499    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:50.633510    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
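
Besides per-container logs, each diagnostic pass also pulls host-level context over SSH. The commands below are reproduced verbatim from the Run: lines in this log (the kubectl path is pinned to the v1.24.1 binary minikube installed inside the VM; a line continuation is added for readability):

	# kubelet and container-runtime journals, plus kernel warnings and errors.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	
	# Node view via the in-VM kubectl and kubeconfig.
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	
	# Overall container status, preferring crictl and falling back to docker.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
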
	I0914 23:43:53.151629    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:53.217852    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:53.218058    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:53.239031    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:43:53.239145    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:53.258323    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:43:53.258416    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:53.269985    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:43:53.270070    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:53.280633    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:43:53.280704    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:53.291232    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:43:53.291314    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:53.302034    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:43:53.302123    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:53.312470    8967 logs.go:276] 0 containers: []
	W0914 23:43:53.312483    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:53.312559    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:53.322638    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:43:53.322654    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:43:53.322659    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:43:53.334683    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:43:53.334693    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:43:53.351812    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:43:53.351821    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:43:53.362977    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:53.362990    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:53.406936    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:43:53.406947    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:43:53.420992    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:43:53.421002    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:43:53.432113    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:43:53.432126    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:53.444213    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:53.444223    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:53.485885    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:53.485896    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:53.490972    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:53.490980    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:53.516691    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:43:53.516702    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:43:53.528550    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:43:53.528561    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:43:53.542958    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:43:53.542967    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:43:53.557591    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:43:53.557601    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:43:53.569864    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:43:53.569875    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:43:53.582179    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:43:53.582190    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:43:53.600701    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:43:53.600710    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:43:53.612151    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:43:53.612160    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:43:53.625905    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:43:53.625916    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:43:56.153514    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:58.153853    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:58.154157    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:58.180136    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:58.180279    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:58.196416    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:58.196499    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:58.209558    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:58.209648    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:58.221026    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:58.221114    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:58.231606    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:58.231680    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:58.242268    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:58.242347    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:58.253109    8956 logs.go:276] 0 containers: []
	W0914 23:43:58.253124    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:58.253192    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:58.263561    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:58.263579    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:58.263585    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:58.275580    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:58.275594    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:58.287004    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:58.287015    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:58.312261    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:58.312273    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:58.324239    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:58.324249    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:58.341246    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:58.341256    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:58.345600    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:58.345609    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:58.356941    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:58.356951    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:58.368028    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:58.368042    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:58.394228    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:58.394238    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:58.422562    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:58.422570    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:58.437204    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:58.437214    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:58.452776    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:58.452786    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:58.470794    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:58.470803    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:58.507363    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:58.507372    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:58.521795    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:58.521803    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:58.534580    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:58.534591    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
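
The block above is one complete diagnostics pass. Two test processes (PIDs 8956 and 8967) interleave in this log, each polling an apiserver at https://10.0.2.15:8443/healthz (10.0.2.15 is the default guest address under QEMU user-mode networking, so both VMs report the same IP). Whenever a probe fails with "context deadline exceeded", logs.go enumerates the k8s_* containers and tails their logs before the next attempt. The following is a minimal Go sketch of that probe loop; the helper name pollHealthz, the timeouts, and the back-off are illustrative assumptions, not minikube's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz is a hypothetical stand-in for the probe that api_server.go
    // logs as "Checking apiserver healthz at https://10.0.2.15:8443/healthz".
    // Each attempt that exceeds the client timeout corresponds to a
    // "stopped: ... context deadline exceeded" line above.
    func pollHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request limit, cf. "Client.Timeout exceeded"
            Transport: &http.Transport{
                // The apiserver inside the VM serves a self-signed certificate.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver answered; cluster is healthy
                }
            }
            time.Sleep(time.Second) // pause before retrying; timing illustrative
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

Read through this sketch, the repeated cycles that follow are successive iterations of the outer loop whose probes keep timing out.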
	I0914 23:44:01.155223    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:01.155504    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:01.183499    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:01.183649    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:01.201604    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:01.201705    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:01.214974    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:01.215065    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:01.226614    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:01.226695    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:01.238099    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:01.238182    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:01.248748    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:01.248838    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:01.258964    8967 logs.go:276] 0 containers: []
	W0914 23:44:01.258977    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:01.259052    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:01.271166    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:01.271182    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:01.271187    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:01.282956    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:01.282967    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:01.300420    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:01.300430    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:01.304927    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:01.304936    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:01.320716    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:01.320729    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:01.334905    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:01.334915    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:01.364671    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:01.364682    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:01.378905    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:01.378919    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:01.421636    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:01.421646    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:01.458063    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:01.458076    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:01.475227    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:01.475239    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:01.487311    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:01.487324    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:01.499467    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:01.499480    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:01.525537    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:01.525595    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:01.540858    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:01.540868    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:01.555937    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:01.555949    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:01.574246    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:01.574258    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:01.586185    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:01.586195    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:01.600445    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:01.600457    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
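
Each pass gathers container logs in two steps, both visible verbatim above: docker ps -a --filter=name=k8s_<component> --format={{.ID}} lists the container IDs for one component (two IDs where an exited instance sits beside a restarted one, a single ID where there was no restart), then docker logs --tail 400 <id> captures each instance. Below is a Go sketch of that two-step pattern using os/exec; the command strings are copied from the log, while the function name gatherComponentLogs and the surrounding structure are illustrative, not minikube's code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gatherComponentLogs mirrors the two-step pattern in the log: list the
    // container IDs for one component, then tail each instance's logs.
    func gatherComponentLogs(component string) error {
        // Step 1: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. the logs.go:276 lines
        // Step 2: docker logs --tail 400 <id> for every matching instance.
        for _, id := range ids {
            logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                return err
            }
            fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
        }
        return nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "storage-provisioner"} {
            if err := gatherComponentLogs(c); err != nil {
                fmt.Println("error:", err)
            }
        }
    }

The paired IDs per component explain why most "Gathering logs for ..." entries above appear twice with different container hashes.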
	I0914 23:44:01.050407    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:04.121720    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:06.051217    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:06.051445    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:06.070734    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:06.070851    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:06.084664    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:06.084760    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:06.096834    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:06.096916    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:06.109090    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:06.109181    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:06.120103    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:06.120183    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:06.130530    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:06.130608    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:06.140971    8956 logs.go:276] 0 containers: []
	W0914 23:44:06.140984    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:06.141051    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:06.151576    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:06.151597    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:06.151602    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:06.156014    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:06.156020    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:06.169432    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:06.169446    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:06.186164    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:06.186175    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:06.199780    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:06.199794    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:06.217973    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:06.217983    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:06.256182    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:06.256198    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:06.268516    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:06.268528    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:06.293926    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:06.293937    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:06.305595    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:06.305608    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:06.317691    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:06.317702    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:06.345105    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:06.345113    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:06.359057    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:06.359071    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:06.378752    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:06.378762    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:06.390675    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:06.390687    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:06.406854    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:06.406866    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:06.418998    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:06.419008    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:09.123908    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:09.124025    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:09.141205    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:09.141296    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:09.151683    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:09.151767    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:09.161835    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:09.161917    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:09.172983    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:09.173060    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:09.183496    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:09.183588    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:09.193785    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:09.193872    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:09.204682    8967 logs.go:276] 0 containers: []
	W0914 23:44:09.204693    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:09.204761    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:09.215305    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:09.215320    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:09.215325    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:09.229398    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:09.229408    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:09.241488    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:09.241501    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:09.254107    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:09.254118    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:09.268482    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:09.268492    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:09.285471    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:09.285482    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:09.303290    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:09.303300    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:09.338239    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:09.338249    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:09.354932    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:09.354946    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:09.366445    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:09.366481    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:09.410486    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:09.410497    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:09.435454    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:09.435465    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:09.450358    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:09.450368    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:09.468302    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:09.468314    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:09.479191    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:09.479202    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:09.504053    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:09.504060    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:09.508163    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:09.508170    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:09.519621    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:09.519632    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:09.530828    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:09.530839    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:12.045613    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:08.944558    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:17.046539    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:17.046894    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:17.088970    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:17.089127    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:17.107800    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:17.107898    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:17.123973    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:17.124064    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:17.135699    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:17.135782    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:17.148050    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:17.148142    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:17.162743    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:17.162831    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:17.174585    8967 logs.go:276] 0 containers: []
	W0914 23:44:17.174600    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:17.174673    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:17.187168    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:17.187185    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:17.187191    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:17.198672    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:17.198682    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:17.222906    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:17.222914    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:17.235433    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:17.235444    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:17.239982    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:17.239989    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:17.251961    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:17.251971    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:17.263669    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:17.263679    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:17.311097    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:17.311106    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:17.324862    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:17.324872    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:17.342843    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:17.342853    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:17.354801    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:17.354811    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:17.375703    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:17.375716    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:17.388047    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:17.388057    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:17.402583    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:17.402596    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:17.417563    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:17.417573    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:17.429574    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:17.429585    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:17.446978    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:17.446987    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:17.482509    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:17.482524    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:17.508165    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:17.508176    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:13.945630    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:13.945829    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:13.962298    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:13.962395    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:13.979267    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:13.979353    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:13.990029    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:13.990106    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:14.001447    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:14.001534    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:14.011826    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:14.011904    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:14.023098    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:14.023177    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:14.033013    8956 logs.go:276] 0 containers: []
	W0914 23:44:14.033027    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:14.033093    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:14.047124    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:14.047148    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:14.047154    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:14.073972    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:14.073979    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:14.087713    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:14.087726    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:14.102078    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:14.102088    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:14.113159    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:14.113171    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:14.148685    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:14.148695    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:14.165393    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:14.165403    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:14.184231    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:14.184242    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:14.195850    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:14.195864    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:14.200360    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:14.200367    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:14.213731    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:14.213746    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:14.228080    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:14.228091    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:14.253619    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:14.253627    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:14.276818    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:14.276830    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:14.297395    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:14.297406    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:14.315273    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:14.315282    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:14.326433    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:14.326442    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:16.840154    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:20.023229    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:21.842527    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:21.842675    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:21.856673    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:21.856765    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:21.868811    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:21.868903    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:21.879386    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:21.879466    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:21.889576    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:21.889656    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:21.900404    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:21.900475    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:21.911476    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:21.911543    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:21.921554    8956 logs.go:276] 0 containers: []
	W0914 23:44:21.921566    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:21.921637    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:21.932307    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:21.932328    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:21.932334    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:21.947163    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:21.947172    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:21.969569    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:21.969579    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:21.984600    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:21.984611    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:22.012086    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:22.012094    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:22.025590    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:22.025600    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:22.039731    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:22.039744    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:22.051326    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:22.051338    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:22.067009    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:22.067018    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:22.085324    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:22.085336    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:22.120628    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:22.120639    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:22.134524    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:22.134534    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:22.138745    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:22.138753    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:22.152042    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:22.152052    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:22.169293    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:22.169303    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:22.180838    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:22.180848    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:22.205175    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:22.205184    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
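
Besides per-container logs, every pass also pulls host-level sources over SSH: the kubelet and docker/cri-docker journals, a filtered dmesg, kubectl describe nodes via the pinned v1.24.1 binary, and a container listing that falls back from crictl to docker. The sketch below replays those exact commands locally through bash -c, mirroring the ssh_runner.go entries; the command strings are copied from the log, while the map and loop (and running locally rather than over SSH) are illustrative assumptions:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hostLogCommands lists the non-container sources shown being collected
    // above ("Gathering logs for kubelet/dmesg/Docker/describe nodes/
    // container status"). Command strings are taken verbatim from the log.
    var hostLogCommands = map[string]string{
        "kubelet": "sudo journalctl -u kubelet -n 400",
        "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
        "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes " +
            "--kubeconfig=/var/lib/minikube/kubeconfig",
        // Falls back to docker when crictl is absent, exactly as logged.
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
        for name, cmd := range hostLogCommands {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", name, err)
            }
            fmt.Printf("%s", out)
        }
    }

The crictl-or-docker fallback keeps the same collection path working on both containerd- and dockerd-backed nodes, which is why it appears unchanged in every cycle.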
	I0914 23:44:25.023690    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:25.023966    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:25.057174    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:25.057333    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:25.074467    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:25.074566    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:25.088160    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:25.088255    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:25.099804    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:25.099891    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:25.110965    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:25.111048    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:25.123440    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:25.123525    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:25.133901    8967 logs.go:276] 0 containers: []
	W0914 23:44:25.133913    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:25.133977    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:25.144933    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:25.144948    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:25.144954    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:25.150038    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:25.150048    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:25.165370    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:25.165381    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:25.180576    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:25.180585    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:25.194083    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:25.194098    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:25.205695    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:25.205707    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:25.248904    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:25.248913    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:25.288404    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:25.288418    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:25.302559    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:25.302573    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:25.314193    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:25.314209    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:25.331577    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:25.331587    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:25.349795    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:25.349810    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:25.374302    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:25.374313    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:25.388021    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:25.388035    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:25.402948    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:25.402960    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:25.414370    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:25.414380    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:25.426050    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:25.426061    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:25.451783    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:25.451793    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:25.466521    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:25.466533    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:24.719241    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:27.978947    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:29.721861    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:29.722209    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:29.760637    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:29.760761    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:29.776387    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:29.776484    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:29.790725    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:29.790809    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:29.802379    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:29.802468    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:29.812824    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:29.812908    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:29.824001    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:29.824081    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:29.834185    8956 logs.go:276] 0 containers: []
	W0914 23:44:29.834197    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:29.834269    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:29.843954    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:29.843969    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:29.843974    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:29.863389    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:29.863399    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:29.876246    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:29.876256    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:29.887760    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:29.887774    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:29.899596    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:29.899610    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:29.935927    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:29.935938    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:29.950309    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:29.950322    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:29.964547    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:29.964559    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:29.987869    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:29.987883    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:29.998986    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:29.998995    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:30.022692    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:30.022704    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:30.035075    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:30.035088    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:30.064991    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:30.065003    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:30.069799    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:30.069807    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:30.085738    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:30.085749    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:30.103682    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:30.103696    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:30.117531    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:30.117543    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:32.631422    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:32.981454    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:32.981722    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:33.008735    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:33.008848    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:33.026020    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:33.026107    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:33.037020    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:33.037103    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:33.048301    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:33.048387    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:33.058922    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:33.059006    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:33.069954    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:33.070026    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:33.080274    8967 logs.go:276] 0 containers: []
	W0914 23:44:33.080285    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:33.080357    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:33.090716    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:33.090729    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:33.090734    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:33.130600    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:33.130612    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:33.156983    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:33.156996    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:33.168932    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:33.168943    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:33.184558    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:33.184569    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:33.206067    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:33.206078    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:33.226884    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:33.226896    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:33.271952    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:33.271972    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:33.295948    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:33.295964    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:33.321604    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:33.321617    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:33.335980    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:33.335990    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:33.347244    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:33.347254    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:33.359314    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:33.359324    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:33.371309    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:33.371323    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:33.388808    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:33.388821    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:33.401300    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:33.401310    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:33.409271    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:33.409279    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:33.423245    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:33.423256    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:33.435326    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:33.435338    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:35.961730    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:37.632868    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:37.633403    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:37.670682    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:37.670847    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:37.692410    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:37.692549    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:37.708669    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:37.708758    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:37.722881    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:37.722966    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:37.734742    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:37.734827    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:37.745698    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:37.745775    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:37.758051    8956 logs.go:276] 0 containers: []
	W0914 23:44:37.758065    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:37.758141    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:37.774110    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:37.774127    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:37.774132    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:37.793293    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:37.793305    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:37.810024    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:37.810036    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:37.823141    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:37.823152    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:37.834814    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:37.834826    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:37.846253    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:37.846263    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:37.870800    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:37.870808    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:37.882133    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:37.882168    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:37.897197    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:37.897212    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:37.912464    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:37.912474    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:37.935467    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:37.935480    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:37.951460    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:37.951473    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:37.980036    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:37.980045    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:37.984579    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:37.984588    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:38.020124    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:38.020139    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:38.033815    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:38.033826    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:38.047428    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:38.047439    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:40.964031    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:40.964404    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:40.996933    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:40.997107    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:41.018114    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:41.018224    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:41.032170    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:41.032256    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:41.047536    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:41.047616    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:41.058511    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:41.058594    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:41.069442    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:41.069523    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:41.084180    8967 logs.go:276] 0 containers: []
	W0914 23:44:41.084191    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:41.084257    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:41.095317    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:41.095335    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:41.095341    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:41.138832    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:41.138846    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:41.167720    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:41.167730    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:41.182147    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:41.182156    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:41.194525    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:41.194537    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:41.212523    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:41.212534    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:41.224393    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:41.224404    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:41.247788    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:41.247796    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:41.284609    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:41.284623    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:41.299502    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:41.299518    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:41.318154    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:41.318167    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:41.330379    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:41.330389    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:41.345065    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:41.345075    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:41.363735    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:41.363747    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:41.375037    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:41.375046    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:41.379389    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:41.379395    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:41.392739    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:41.392750    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:41.405542    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:41.405554    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:41.421354    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:41.421369    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
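Each gathering cycle starts by discovering container IDs per control-plane component with docker ps name filters, producing the "N containers: [...]" lines above (usually two IDs per component, since -a also lists exited containers left over from the restart). A sketch of that discovery step, under the assumption that plain os/exec calls suffice; the function names are illustrative, not minikube's:

    // containerIDs is a hypothetical sketch of the discovery step logged as
    // "docker ps -a --filter=name=k8s_<component> --format={{.ID}}".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line; Fields drops blanks
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("listing %s: %v\n", c, err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // kindnet yields 0 on this driver
        }
    }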
	I0914 23:44:40.567829    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:43.935192    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:45.570312    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:45.570448    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:45.583620    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:45.583708    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:45.594103    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:45.594186    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:45.604714    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:45.604794    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:45.615654    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:45.615731    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:45.627757    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:45.627838    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:45.638594    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:45.638676    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:45.649276    8956 logs.go:276] 0 containers: []
	W0914 23:44:45.649292    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:45.649367    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:45.659494    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:45.659513    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:45.659519    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:45.664138    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:45.664147    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:45.676846    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:45.676856    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:45.691670    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:45.691681    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:45.711828    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:45.711839    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:45.729657    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:45.729668    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:45.753118    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:45.753126    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:45.768677    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:45.768687    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:45.803811    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:45.803822    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:45.817828    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:45.817841    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:45.829112    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:45.829123    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:45.841046    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:45.841057    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:45.852526    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:45.852536    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:45.885650    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:45.885667    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:45.908595    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:45.908612    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:45.926507    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:45.926517    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:45.938451    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:45.938462    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:48.452105    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:48.937409    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:48.937652    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:48.965175    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:48.965307    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:48.984689    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:48.984790    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:49.000306    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:49.000394    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:49.010860    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:49.010941    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:49.021105    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:49.021192    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:49.031969    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:49.032062    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:49.046742    8967 logs.go:276] 0 containers: []
	W0914 23:44:49.046754    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:49.046818    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:49.057344    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:49.057359    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:49.057364    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:49.097378    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:49.097389    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:49.112708    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:49.112725    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:49.131984    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:49.131994    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:49.149963    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:49.149973    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:49.161236    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:49.161246    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:49.185696    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:49.185702    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:49.190285    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:49.190291    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:49.204442    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:49.204452    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:49.219024    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:49.219035    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:49.230681    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:49.230693    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:49.242789    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:49.242802    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:49.259882    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:49.259895    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:49.304262    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:49.304271    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:49.330145    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:49.330156    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:49.343110    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:49.343118    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:49.356898    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:49.356908    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:49.375340    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:49.375353    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:49.388563    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:49.388574    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
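The "container status" step above uses a shell fallback chain: `which crictl || echo crictl` resolves crictl's path when it is installed and otherwise substitutes the bare name, so a missing or failing crictl makes the first command exit nonzero and `|| sudo docker ps -a` takes over. A sketch invoking that same one-liner from Go (the surrounding plumbing is assumed, not minikube's code):

    // Hypothetical sketch of the "container status" fallback logged above:
    // prefer crictl when installed, fall back to docker ps otherwise.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }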
	I0914 23:44:51.902688    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:53.454395    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:53.454574    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:53.465429    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:53.465517    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:53.476079    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:53.476152    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:53.486570    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:53.486648    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:53.498812    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:53.498899    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:53.508941    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:53.509027    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:53.523079    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:53.523168    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:53.533234    8956 logs.go:276] 0 containers: []
	W0914 23:44:53.533247    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:53.533314    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:53.544268    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:53.544287    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:53.544292    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:53.556549    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:53.556584    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:53.568574    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:53.568585    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:53.603893    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:53.603906    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:53.616753    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:53.616764    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:53.630848    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:53.630863    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:53.654033    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:53.654043    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:53.666672    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:53.666682    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:53.683574    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:53.683584    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:53.695304    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:53.695316    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:53.718689    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:53.718696    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:53.745895    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:53.745910    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:56.904173    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:56.904502    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:56.934950    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:44:56.935084    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:56.952478    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:44:56.952590    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:56.967159    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:44:56.967255    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:56.978546    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:44:56.978634    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:56.988665    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:44:56.988743    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:56.999376    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:44:56.999459    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:57.009460    8967 logs.go:276] 0 containers: []
	W0914 23:44:57.009472    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:57.009543    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:57.020387    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:44:57.020403    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:57.020408    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:57.025533    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:44:57.025538    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:44:57.039580    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:44:57.039593    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:44:57.063881    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:44:57.063892    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:44:57.078735    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:44:57.078750    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:44:57.100301    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:57.100311    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:57.125805    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:44:57.125820    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:44:57.140142    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:44:57.140158    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:44:57.157369    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:44:57.157380    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:44:57.169006    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:57.169017    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:57.210950    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:57.210977    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:57.247238    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:44:57.247250    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:44:57.260955    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:44:57.260965    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:44:57.276469    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:44:57.276483    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:44:57.288244    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:44:57.288254    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:44:57.299840    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:44:57.299853    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:44:57.311448    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:44:57.311641    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:44:57.326987    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:44:57.327003    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:44:57.343636    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:44:57.343647    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:53.761291    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:53.761301    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:53.772414    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:53.772425    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:53.794144    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:53.794154    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:53.798358    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:53.798368    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:53.812089    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:53.812097    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:56.329944    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:59.858461    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:01.332143    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:01.332413    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:01.361277    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:45:01.361432    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:01.378488    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:45:01.378591    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:01.392035    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:45:01.392122    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:01.403564    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:45:01.403646    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:01.414785    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:45:01.414858    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:01.425059    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:45:01.425143    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:01.435423    8956 logs.go:276] 0 containers: []
	W0914 23:45:01.435430    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:01.435490    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:01.445581    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:45:01.445605    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:45:01.445610    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:45:01.457297    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:01.457310    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:01.493106    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:45:01.493117    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:45:01.504565    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:45:01.504577    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:45:01.527659    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:45:01.527670    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:45:01.546127    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:45:01.546137    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:45:01.564291    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:45:01.564305    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:45:01.583551    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:45:01.583561    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:45:01.597808    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:45:01.597821    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:45:01.609365    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:01.609380    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:01.636371    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:01.636378    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:01.640925    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:45:01.640931    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:45:01.666421    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:45:01.666436    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:01.678222    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:45:01.678232    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:45:01.694945    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:45:01.694956    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:45:01.710673    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:45:01.710689    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:45:01.722992    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:01.723002    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
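Every log source in these cycles is capped the same way: container logs at the last 400 lines (docker logs --tail 400) and systemd units at the last 400 journal entries (journalctl -n 400). A sketch of the two tail operations; the helper names are assumptions for illustration, and the container ID is one taken from the log above:

    // Hypothetical helpers mirroring the capped log gathering above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func tailContainer(id string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "docker logs --tail 400 "+id).CombinedOutput()
        return string(out), err
    }

    func tailUnits(units ...string) (string, error) {
        cmd := "sudo journalctl"
        for _, u := range units {
            cmd += " -u " + u
        }
        cmd += " -n 400"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        if logs, err := tailContainer("9a18c39c6c87"); err == nil { // kube-apiserver ID from the log
            fmt.Print(logs)
        }
        if logs, err := tailUnits("docker", "cri-docker"); err == nil {
            fmt.Print(logs)
        }
    }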
	I0914 23:45:04.860951    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:04.861240    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:04.891600    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:04.891746    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:04.908693    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:04.908789    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:04.921881    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:04.921959    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:04.932939    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:04.933005    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:04.943839    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:04.943923    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:04.954572    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:04.954645    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:04.968612    8967 logs.go:276] 0 containers: []
	W0914 23:45:04.968626    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:04.968695    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:04.979061    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:04.979078    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:04.979084    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:04.990163    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:04.990176    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:05.002133    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:05.002145    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:05.019002    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:05.019012    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:05.030597    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:05.030606    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:05.035230    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:05.035236    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:05.046490    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:05.046501    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:05.087993    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:05.088004    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:05.102162    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:05.102176    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:05.118597    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:05.118611    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:05.130332    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:05.130345    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:05.142526    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:05.142537    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:05.157843    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:05.157857    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:05.169530    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:05.169541    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:05.194313    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:05.194325    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:05.206141    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:05.206154    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:05.223079    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:05.223091    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:05.247317    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:05.247327    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:05.288205    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:05.288216    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:07.813351    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:04.249584    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:12.814568    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:12.814845    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:12.840502    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:12.840640    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:12.859051    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:12.859165    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:12.872400    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:12.872490    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:12.884155    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:12.884239    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:12.894676    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:12.894751    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:12.905488    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:12.905566    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:12.915996    8967 logs.go:276] 0 containers: []
	W0914 23:45:12.916005    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:12.916068    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:12.929335    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:12.929359    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:12.929365    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:09.251894    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:09.252158    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:09.279580    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:45:09.279711    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:09.297045    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:45:09.297157    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:09.310231    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:45:09.310308    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:09.323532    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:45:09.323624    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:09.334835    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:45:09.334913    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:09.345685    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:45:09.345771    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:09.356667    8956 logs.go:276] 0 containers: []
	W0914 23:45:09.356677    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:09.356748    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:09.367110    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:45:09.367135    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:45:09.367141    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:45:09.379014    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:45:09.379027    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:09.391688    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:09.391699    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:09.420951    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:45:09.420959    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:45:09.433562    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:45:09.433575    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:45:09.446933    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:45:09.446943    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:45:09.465503    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:45:09.465517    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:45:09.477209    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:45:09.477222    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:45:09.500867    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:45:09.500880    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:45:09.516525    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:45:09.516537    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:45:09.535782    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:09.535791    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:09.540198    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:45:09.540206    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:45:09.561575    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:45:09.561588    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:45:09.574418    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:45:09.574430    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:45:09.592710    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:09.592723    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:09.617762    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:09.617779    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:09.657534    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:45:09.657546    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:45:12.171783    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:12.972087    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:12.972097    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:12.976454    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:12.976462    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:12.990244    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:12.990253    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:13.004103    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:13.004112    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:13.029041    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:13.029053    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:13.040627    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:13.040639    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:13.052231    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:13.052245    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:13.071415    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:13.071426    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:13.085887    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:13.085901    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:13.097706    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:13.097716    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:13.109781    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:13.109797    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:13.129538    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:13.129554    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:13.142780    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:13.142792    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:13.178239    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:13.178249    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:13.189637    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:13.189649    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:13.211013    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:13.211024    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:13.229047    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:13.229058    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:13.252780    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:13.252786    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:15.766776    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:17.174031    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:17.174307    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:17.197913    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:45:17.198065    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:17.214057    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:45:17.214152    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:17.226836    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:45:17.226923    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:17.238220    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:45:17.238306    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:17.249098    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:45:17.249176    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:17.259846    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:45:17.259919    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:17.270544    8956 logs.go:276] 0 containers: []
	W0914 23:45:17.270555    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:17.270625    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:17.281159    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:45:17.281178    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:45:17.281184    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:45:17.304053    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:45:17.304067    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:45:17.321973    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:17.321982    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:17.346035    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:17.346041    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:17.374042    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:17.374051    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:17.378033    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:45:17.378039    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:45:17.392233    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:17.392243    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:17.426766    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:45:17.426776    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:45:17.440182    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:45:17.440193    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:45:17.451700    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:45:17.451711    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:45:17.463508    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:45:17.463519    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:45:17.474832    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:45:17.474843    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:17.486908    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:45:17.486918    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:45:17.501748    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:45:17.501759    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:45:17.523914    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:45:17.523924    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:45:17.536423    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:45:17.536435    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:45:17.551484    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:45:17.551496    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:45:20.769008    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:20.769255    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:20.794267    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:20.794389    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:20.808936    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:20.809031    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:20.820920    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:20.821003    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:20.831856    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:20.831938    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:20.843585    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:20.843660    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:20.854665    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:20.854755    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:20.864714    8967 logs.go:276] 0 containers: []
	W0914 23:45:20.864725    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:20.864791    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:20.875322    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:20.875339    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:20.875344    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:20.887872    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:20.887885    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:20.900993    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:20.901006    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:20.918699    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:20.918709    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:20.953641    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:20.953653    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:20.967415    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:20.967425    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:20.992408    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:20.992423    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:21.003905    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:21.003915    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:21.023619    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:21.023629    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:21.036109    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:21.036120    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:21.053690    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:21.053703    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:21.067835    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:21.067850    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:21.080580    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:21.080591    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:21.099559    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:21.099571    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:21.143892    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:21.143910    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:21.149006    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:21.149013    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:21.163943    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:21.163953    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:21.175966    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:21.175976    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:21.188081    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:21.188094    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:20.076154    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:23.713383    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:25.076907    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:25.077358    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:25.118700    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:45:25.118875    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:25.142915    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:45:25.143042    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:25.157633    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:45:25.157723    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:25.169574    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:45:25.169679    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:25.180240    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:45:25.180311    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:25.190889    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:45:25.190971    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:25.200991    8956 logs.go:276] 0 containers: []
	W0914 23:45:25.201001    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:25.201064    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:25.212068    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:45:25.212082    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:45:25.212088    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:45:25.231976    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:45:25.231989    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:45:25.243196    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:45:25.243207    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:45:25.254703    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:45:25.254716    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:45:25.268368    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:45:25.268381    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:45:25.291616    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:45:25.291626    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:45:25.303197    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:25.303207    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:25.333107    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:45:25.333118    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:45:25.345643    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:45:25.345653    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:45:25.361469    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:45:25.361479    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:45:25.379449    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:45:25.379459    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:45:25.396805    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:25.396815    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:25.420295    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:25.420303    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:25.424281    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:25.424291    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:25.466440    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:45:25.466451    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:45:25.481071    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:45:25.481081    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:45:25.499117    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:45:25.499127    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:28.013168    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:28.715573    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:28.715738    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:28.730003    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:28.730097    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:28.741832    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:28.741915    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:28.752446    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:28.752524    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:28.763888    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:28.763972    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:28.774392    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:28.774475    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:28.785195    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:28.785268    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:28.796437    8967 logs.go:276] 0 containers: []
	W0914 23:45:28.796447    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:28.796515    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:28.807634    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:28.807649    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:28.807654    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:28.825761    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:28.825771    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:28.839565    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:28.839581    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:28.844507    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:28.844513    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:28.862728    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:28.862738    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:28.874011    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:28.874021    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:28.886419    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:28.886431    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:28.904568    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:28.904578    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:28.929178    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:28.929186    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:28.942576    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:28.942586    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:28.977670    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:28.977681    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:29.004798    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:29.004809    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:29.016439    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:29.016451    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:29.028519    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:29.028534    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:29.042976    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:29.042987    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:29.057848    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:29.057861    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:29.072480    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:29.072490    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:29.113597    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:29.113608    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:29.125328    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:29.125338    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:31.639215    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:33.015533    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:33.016107    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:33.059431    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:45:33.059595    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:33.080589    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:45:33.080726    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:33.096205    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:45:33.096300    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:33.109096    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:45:33.109190    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:33.120295    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:45:33.120373    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:33.132789    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:45:33.132876    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:33.143085    8956 logs.go:276] 0 containers: []
	W0914 23:45:33.143100    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:33.143169    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:33.153651    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:45:33.153668    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:33.153673    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:33.180309    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:33.180318    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:33.214915    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:45:33.214925    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:45:33.230938    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:33.230950    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:33.253361    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:33.253369    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:33.257374    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:45:33.257381    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:45:33.284982    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:45:33.284992    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:45:33.309991    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:45:33.310001    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:45:33.322000    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:45:33.322011    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:45:33.336258    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:45:33.336268    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:45:33.347883    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:45:33.347894    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:45:33.365961    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:45:33.365973    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:45:33.384161    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:45:33.384171    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:33.396442    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:45:33.396452    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:45:33.409504    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:45:33.409520    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:45:33.428217    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:45:33.428227    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:45:33.442764    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:45:33.442773    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:45:36.641782    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:36.641992    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:36.661871    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:36.661989    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:36.675811    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:36.675908    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:36.688439    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:36.688531    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:36.701724    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:36.701808    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:36.712133    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:36.712213    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:36.730114    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:36.730204    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:36.742212    8967 logs.go:276] 0 containers: []
	W0914 23:45:36.742223    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:36.742294    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:36.753758    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:36.753775    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:36.753780    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:36.768100    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:36.768111    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:36.783758    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:36.783768    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:36.795784    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:36.795794    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:36.800625    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:36.800631    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:36.814016    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:36.814024    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:36.840468    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:36.840480    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:36.855010    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:36.855021    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:36.866767    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:36.866783    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:36.877791    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:36.877803    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:36.890356    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:36.890369    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:36.929727    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:36.929741    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:36.943868    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:36.943882    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:36.957816    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:36.957827    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:36.999284    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:36.999298    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:37.011734    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:37.011749    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:37.023945    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:37.023955    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:37.041973    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:37.041984    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:37.060128    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:37.060138    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:35.956055    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:40.958362    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:40.958635    8956 kubeadm.go:597] duration metric: took 4m3.64337675s to restartPrimaryControlPlane
	W0914 23:45:40.958760    8956 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 23:45:40.958801    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0914 23:45:41.969705    8956 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.010909s)
	I0914 23:45:41.969780    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:45:41.974802    8956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:45:41.977738    8956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:45:41.980593    8956 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 23:45:41.980600    8956 kubeadm.go:157] found existing configuration files:
	
	I0914 23:45:41.980628    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/admin.conf
	I0914 23:45:41.983845    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 23:45:41.983878    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 23:45:41.987102    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/kubelet.conf
	I0914 23:45:41.990024    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 23:45:41.990048    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 23:45:41.992695    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/controller-manager.conf
	I0914 23:45:41.995737    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 23:45:41.995763    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 23:45:41.998879    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/scheduler.conf
	I0914 23:45:42.001394    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 23:45:42.001422    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 23:45:42.004123    8956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 23:45:42.022997    8956 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0914 23:45:42.023029    8956 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 23:45:42.075339    8956 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 23:45:42.075394    8956 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 23:45:42.075440    8956 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 23:45:42.123962    8956 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 23:45:42.128304    8956 out.go:235]   - Generating certificates and keys ...
	I0914 23:45:42.128347    8956 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 23:45:42.128389    8956 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 23:45:42.128433    8956 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 23:45:42.128469    8956 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 23:45:42.128512    8956 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 23:45:42.128539    8956 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 23:45:42.128578    8956 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 23:45:42.128607    8956 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 23:45:42.128640    8956 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 23:45:42.128677    8956 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 23:45:42.128696    8956 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 23:45:42.128729    8956 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 23:45:42.386689    8956 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 23:45:42.521657    8956 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 23:45:42.574615    8956 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 23:45:42.745170    8956 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 23:45:42.776403    8956 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 23:45:42.776702    8956 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 23:45:42.776731    8956 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 23:45:42.845138    8956 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 23:45:39.584003    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:42.850063    8956 out.go:235]   - Booting up control plane ...
	I0914 23:45:42.850113    8956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 23:45:42.850178    8956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 23:45:42.850222    8956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 23:45:42.850279    8956 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 23:45:42.850353    8956 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 23:45:44.586554    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:44.586684    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:44.598019    8967 logs.go:276] 2 containers: [5367cbee1a41 8b51eca867bc]
	I0914 23:45:44.598115    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:44.609592    8967 logs.go:276] 2 containers: [6cd07b9ac53e e723f75a5293]
	I0914 23:45:44.609687    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:44.621783    8967 logs.go:276] 2 containers: [b5ca5970c445 05233d01ab13]
	I0914 23:45:44.621861    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:44.633046    8967 logs.go:276] 2 containers: [72797592ce54 ed8d75c41830]
	I0914 23:45:44.633136    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:44.644375    8967 logs.go:276] 2 containers: [335097c529be d971e5f7858d]
	I0914 23:45:44.644459    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:44.655046    8967 logs.go:276] 2 containers: [88dcdf6c46e8 e456f04c65a9]
	I0914 23:45:44.655133    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:44.665631    8967 logs.go:276] 0 containers: []
	W0914 23:45:44.665643    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:44.665714    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:44.677244    8967 logs.go:276] 2 containers: [d06bb56d204a cdbc600fbbb4]
	I0914 23:45:44.677261    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:44.677266    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:44.720674    8967 logs.go:123] Gathering logs for kube-scheduler [ed8d75c41830] ...
	I0914 23:45:44.720695    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed8d75c41830"
	I0914 23:45:44.737306    8967 logs.go:123] Gathering logs for kube-proxy [335097c529be] ...
	I0914 23:45:44.737317    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 335097c529be"
	I0914 23:45:44.752160    8967 logs.go:123] Gathering logs for kube-controller-manager [88dcdf6c46e8] ...
	I0914 23:45:44.752173    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88dcdf6c46e8"
	I0914 23:45:44.770891    8967 logs.go:123] Gathering logs for kube-controller-manager [e456f04c65a9] ...
	I0914 23:45:44.770901    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e456f04c65a9"
	I0914 23:45:44.789709    8967 logs.go:123] Gathering logs for storage-provisioner [cdbc600fbbb4] ...
	I0914 23:45:44.789719    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbc600fbbb4"
	I0914 23:45:44.801258    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:45:44.801270    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:44.818127    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:44.818140    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:44.822608    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:44.822615    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:44.862640    8967 logs.go:123] Gathering logs for kube-apiserver [8b51eca867bc] ...
	I0914 23:45:44.862651    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b51eca867bc"
	I0914 23:45:44.892343    8967 logs.go:123] Gathering logs for etcd [e723f75a5293] ...
	I0914 23:45:44.892357    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e723f75a5293"
	I0914 23:45:44.907874    8967 logs.go:123] Gathering logs for coredns [b5ca5970c445] ...
	I0914 23:45:44.907884    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ca5970c445"
	I0914 23:45:44.919436    8967 logs.go:123] Gathering logs for storage-provisioner [d06bb56d204a] ...
	I0914 23:45:44.919446    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06bb56d204a"
	I0914 23:45:44.931069    8967 logs.go:123] Gathering logs for kube-apiserver [5367cbee1a41] ...
	I0914 23:45:44.931080    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5367cbee1a41"
	I0914 23:45:44.945023    8967 logs.go:123] Gathering logs for etcd [6cd07b9ac53e] ...
	I0914 23:45:44.945038    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd07b9ac53e"
	I0914 23:45:44.958595    8967 logs.go:123] Gathering logs for coredns [05233d01ab13] ...
	I0914 23:45:44.958605    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05233d01ab13"
	I0914 23:45:44.970348    8967 logs.go:123] Gathering logs for kube-scheduler [72797592ce54] ...
	I0914 23:45:44.970359    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72797592ce54"
	I0914 23:45:44.981894    8967 logs.go:123] Gathering logs for kube-proxy [d971e5f7858d] ...
	I0914 23:45:44.981906    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d971e5f7858d"
	I0914 23:45:44.994872    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:44.994884    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:47.521802    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:47.350217    8956 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501765 seconds
	I0914 23:45:47.350278    8956 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 23:45:47.354438    8956 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 23:45:47.865209    8956 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 23:45:47.865412    8956 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-438000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 23:45:48.369910    8956 kubeadm.go:310] [bootstrap-token] Using token: bbd4ls.6ujjfp6cj079ummm
	I0914 23:45:48.376220    8956 out.go:235]   - Configuring RBAC rules ...
	I0914 23:45:48.376292    8956 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 23:45:48.376357    8956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 23:45:48.378084    8956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 23:45:48.382867    8956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 23:45:48.383860    8956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 23:45:48.384647    8956 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 23:45:48.387971    8956 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 23:45:48.527004    8956 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 23:45:48.774337    8956 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 23:45:48.774802    8956 kubeadm.go:310] 
	I0914 23:45:48.774835    8956 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 23:45:48.774841    8956 kubeadm.go:310] 
	I0914 23:45:48.774882    8956 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 23:45:48.774885    8956 kubeadm.go:310] 
	I0914 23:45:48.774897    8956 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 23:45:48.774944    8956 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 23:45:48.774980    8956 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 23:45:48.774984    8956 kubeadm.go:310] 
	I0914 23:45:48.775019    8956 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 23:45:48.775022    8956 kubeadm.go:310] 
	I0914 23:45:48.775047    8956 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 23:45:48.775049    8956 kubeadm.go:310] 
	I0914 23:45:48.775082    8956 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 23:45:48.775118    8956 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 23:45:48.775150    8956 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 23:45:48.775153    8956 kubeadm.go:310] 
	I0914 23:45:48.775202    8956 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 23:45:48.775245    8956 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 23:45:48.775250    8956 kubeadm.go:310] 
	I0914 23:45:48.775294    8956 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bbd4ls.6ujjfp6cj079ummm \
	I0914 23:45:48.775354    8956 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3496b266fd1cfe9142221ef290f09745f4c6a279684c03f4e3160434112e5d40 \
	I0914 23:45:48.775364    8956 kubeadm.go:310] 	--control-plane 
	I0914 23:45:48.775368    8956 kubeadm.go:310] 
	I0914 23:45:48.775415    8956 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 23:45:48.775419    8956 kubeadm.go:310] 
	I0914 23:45:48.775466    8956 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bbd4ls.6ujjfp6cj079ummm \
	I0914 23:45:48.775525    8956 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3496b266fd1cfe9142221ef290f09745f4c6a279684c03f4e3160434112e5d40 
	I0914 23:45:48.775686    8956 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 23:45:48.775735    8956 cni.go:84] Creating CNI manager for ""
	I0914 23:45:48.775745    8956 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:45:48.779043    8956 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 23:45:48.785970    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 23:45:48.789127    8956 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 23:45:48.793719    8956 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 23:45:48.793808    8956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-438000 minikube.k8s.io/updated_at=2024_09_14T23_45_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=stopped-upgrade-438000 minikube.k8s.io/primary=true
	I0914 23:45:48.793849    8956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:45:48.796803    8956 ops.go:34] apiserver oom_adj: -16
	I0914 23:45:48.840233    8956 kubeadm.go:1113] duration metric: took 46.488ms to wait for elevateKubeSystemPrivileges
	I0914 23:45:48.840257    8956 kubeadm.go:394] duration metric: took 4m11.538652625s to StartCluster
	I0914 23:45:48.840271    8956 settings.go:142] acquiring lock: {Name:mk03c42e45b73d6f59721a178a8a31fc79d22668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:45:48.840427    8956 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:45:48.840843    8956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/kubeconfig: {Name:mke334fd43bb51604954449e74caf7f81dee5b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:45:48.841061    8956 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:45:48.841095    8956 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 23:45:48.841137    8956 config.go:182] Loaded profile config "stopped-upgrade-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 23:45:48.841140    8956 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-438000"
	I0914 23:45:48.841148    8956 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-438000"
	W0914 23:45:48.841151    8956 addons.go:243] addon storage-provisioner should already be in state true
	I0914 23:45:48.841164    8956 host.go:66] Checking if "stopped-upgrade-438000" exists ...
	I0914 23:45:48.841171    8956 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-438000"
	I0914 23:45:48.841184    8956 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-438000"
	I0914 23:45:48.845082    8956 out.go:177] * Verifying Kubernetes components...
	I0914 23:45:48.845713    8956 kapi.go:59] client config for stopped-upgrade-438000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/client.key", CAFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104949800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 23:45:48.848276    8956 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-438000"
	W0914 23:45:48.848280    8956 addons.go:243] addon default-storageclass should already be in state true
	I0914 23:45:48.848287    8956 host.go:66] Checking if "stopped-upgrade-438000" exists ...
	I0914 23:45:48.848819    8956 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 23:45:48.848824    8956 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 23:45:48.848829    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
	I0914 23:45:48.850956    8956 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:45:52.522246    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:52.522314    8967 kubeadm.go:597] duration metric: took 4m7.51233775s to restartPrimaryControlPlane
	W0914 23:45:52.522373    8967 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 23:45:52.522406    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0914 23:45:48.852277    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:45:48.855077    8956 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:45:48.855093    8956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 23:45:48.855107    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
	I0914 23:45:48.924370    8956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 23:45:48.929653    8956 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:45:48.929702    8956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:45:48.934373    8956 api_server.go:72] duration metric: took 93.30125ms to wait for apiserver process to appear ...
	I0914 23:45:48.934382    8956 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:45:48.934391    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:48.939422    8956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 23:45:48.955804    8956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:45:49.297197    8956 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 23:45:49.297209    8956 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 23:45:53.608323    8967 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.085925667s)
	I0914 23:45:53.608424    8967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:45:53.613554    8967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:45:53.616385    8967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:45:53.619931    8967 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 23:45:53.619937    8967 kubeadm.go:157] found existing configuration files:
	
	I0914 23:45:53.619970    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/admin.conf
	I0914 23:45:53.623171    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 23:45:53.623205    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 23:45:53.626172    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/kubelet.conf
	I0914 23:45:53.629346    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 23:45:53.629385    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 23:45:53.632377    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/controller-manager.conf
	I0914 23:45:53.635756    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 23:45:53.635793    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 23:45:53.638958    8967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/scheduler.conf
	I0914 23:45:53.641567    8967 kubeadm.go:163] "https://control-plane.minikube.internal:51345" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51345 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 23:45:53.641599    8967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
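The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: for each control-plane kubeconfig, check whether it references the expected API endpoint, and remove it if the check fails (here every file is simply missing after the reset). A minimal sketch of that loop, with a local stand-in for minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Endpoint taken from the grep commands above; run() is a local
    	// stand-in for minikube's ssh_runner and executes on this host.
    	endpoint := "https://control-plane.minikube.internal:51345"
    	run := func(cmd string) error {
    		return exec.Command("/bin/bash", "-c", cmd).Run()
    	}
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent or the file is missing.
    		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			_ = run("sudo rm -f " + f) // let kubeadm init rewrite it
    		}
    	}
    }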
	I0914 23:45:53.644461    8967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 23:45:53.661283    8967 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0914 23:45:53.661312    8967 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 23:45:53.711251    8967 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 23:45:53.711315    8967 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 23:45:53.711404    8967 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 23:45:53.760001    8967 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 23:45:53.765340    8967 out.go:235]   - Generating certificates and keys ...
	I0914 23:45:53.765373    8967 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 23:45:53.765412    8967 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 23:45:53.765460    8967 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 23:45:53.765494    8967 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 23:45:53.765529    8967 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 23:45:53.765567    8967 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 23:45:53.765607    8967 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 23:45:53.765643    8967 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 23:45:53.765684    8967 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 23:45:53.765725    8967 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 23:45:53.765752    8967 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 23:45:53.765783    8967 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 23:45:53.818424    8967 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 23:45:53.897558    8967 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 23:45:53.975307    8967 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 23:45:54.095769    8967 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 23:45:54.129079    8967 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 23:45:54.129451    8967 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 23:45:54.129533    8967 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 23:45:54.216611    8967 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 23:45:54.220841    8967 out.go:235]   - Booting up control plane ...
	I0914 23:45:54.220894    8967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 23:45:54.220937    8967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 23:45:54.221658    8967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 23:45:54.221998    8967 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 23:45:54.222835    8967 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 23:45:53.936359    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:53.936377    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
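Both processes (8956 and 8967) poll https://10.0.2.15:8443/healthz on a roughly five-second cadence, and every attempt fails with "Client.Timeout exceeded while awaiting headers". A minimal sketch of such a polling loop, assuming a 5s per-request timeout; the real code verifies the apiserver certificate against the cluster CA, which this self-contained version skips:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the "Client.Timeout exceeded" errors above
    		Transport: &http.Transport{
    			// The apiserver serves a cluster-signed cert; skipping
    			// verification keeps this sketch self-contained.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. context deadline exceeded
    			time.Sleep(time.Second)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    			fmt.Println("apiserver healthy")
    			return
    		}
    	}
    }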
	I0914 23:45:58.724700    8967 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501784 seconds
	I0914 23:45:58.724770    8967 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 23:45:58.728505    8967 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 23:45:59.237566    8967 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 23:45:59.237698    8967 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-386000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 23:45:59.741749    8967 kubeadm.go:310] [bootstrap-token] Using token: cl4op1.2r209r77gn303h2a
	I0914 23:45:59.748950    8967 out.go:235]   - Configuring RBAC rules ...
	I0914 23:45:59.749017    8967 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 23:45:59.749063    8967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 23:45:59.754608    8967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 23:45:59.755458    8967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0914 23:45:59.756423    8967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 23:45:59.757336    8967 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 23:45:59.760629    8967 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 23:45:59.953060    8967 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 23:46:00.145269    8967 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 23:46:00.145875    8967 kubeadm.go:310] 
	I0914 23:46:00.145911    8967 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 23:46:00.145916    8967 kubeadm.go:310] 
	I0914 23:46:00.145970    8967 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 23:46:00.145975    8967 kubeadm.go:310] 
	I0914 23:46:00.145990    8967 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 23:46:00.146036    8967 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 23:46:00.146076    8967 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 23:46:00.146080    8967 kubeadm.go:310] 
	I0914 23:46:00.146108    8967 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 23:46:00.146111    8967 kubeadm.go:310] 
	I0914 23:46:00.146142    8967 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 23:46:00.146145    8967 kubeadm.go:310] 
	I0914 23:46:00.146173    8967 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 23:46:00.146214    8967 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 23:46:00.146261    8967 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 23:46:00.146264    8967 kubeadm.go:310] 
	I0914 23:46:00.146307    8967 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 23:46:00.146343    8967 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 23:46:00.146345    8967 kubeadm.go:310] 
	I0914 23:46:00.146386    8967 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cl4op1.2r209r77gn303h2a \
	I0914 23:46:00.146444    8967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3496b266fd1cfe9142221ef290f09745f4c6a279684c03f4e3160434112e5d40 \
	I0914 23:46:00.146465    8967 kubeadm.go:310] 	--control-plane 
	I0914 23:46:00.146469    8967 kubeadm.go:310] 
	I0914 23:46:00.146525    8967 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 23:46:00.146534    8967 kubeadm.go:310] 
	I0914 23:46:00.146578    8967 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cl4op1.2r209r77gn303h2a \
	I0914 23:46:00.146650    8967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3496b266fd1cfe9142221ef290f09745f4c6a279684c03f4e3160434112e5d40 
	I0914 23:46:00.146721    8967 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
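The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, hex-encoded with a "sha256:" prefix. A sketch that recomputes it; the ca.crt filename under the certificateDir logged above is an assumption:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Directory taken from the "[certs] Using certificateDir folder" line above;
    	// the ca.crt filename is assumed.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(spki)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }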
	I0914 23:46:00.146729    8967 cni.go:84] Creating CNI manager for ""
	I0914 23:46:00.146738    8967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:46:00.155435    8967 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 23:46:00.159611    8967 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 23:46:00.162644    8967 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
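The line above writes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist. The log does not reproduce the file's contents, so the conflist below is only a representative bridge configuration; the CIDR and plugin options are assumptions:

    package main

    import "os"

    func main() {
    	// Representative bridge conflist; the real file written above is not
    	// shown in the log, so these fields are illustrative assumptions.
    	conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }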
	I0914 23:46:00.167402    8967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 23:46:00.167450    8967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:46:00.167467    8967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-386000 minikube.k8s.io/updated_at=2024_09_14T23_46_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=running-upgrade-386000 minikube.k8s.io/primary=true
	I0914 23:46:00.208621    8967 kubeadm.go:1113] duration metric: took 41.211375ms to wait for elevateKubeSystemPrivileges
	I0914 23:46:00.208650    8967 ops.go:34] apiserver oom_adj: -16
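The "apiserver oom_adj: -16" line comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command run a few lines earlier: a negative oom_adj makes the kernel OOM killer much less likely to pick the apiserver. A minimal sketch of the same check:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Mirrors the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command above.
    	out, err := exec.Command("/bin/bash", "-c",
    		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
    }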
	I0914 23:46:00.208655    8967 kubeadm.go:394] duration metric: took 4m15.213571292s to StartCluster
	I0914 23:46:00.208665    8967 settings.go:142] acquiring lock: {Name:mk03c42e45b73d6f59721a178a8a31fc79d22668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:46:00.208758    8967 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:46:00.209188    8967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/kubeconfig: {Name:mke334fd43bb51604954449e74caf7f81dee5b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:46:00.209389    8967 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:46:00.209444    8967 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 23:46:00.209483    8967 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-386000"
	I0914 23:46:00.209493    8967 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-386000"
	W0914 23:46:00.209496    8967 addons.go:243] addon storage-provisioner should already be in state true
	I0914 23:46:00.209505    8967 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-386000"
	I0914 23:46:00.209509    8967 host.go:66] Checking if "running-upgrade-386000" exists ...
	I0914 23:46:00.209514    8967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-386000"
	I0914 23:46:00.209514    8967 config.go:182] Loaded profile config "running-upgrade-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 23:46:00.209796    8967 retry.go:31] will retry after 1.311000543s: connect: dial unix /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/monitor: connect: connection refused
	I0914 23:46:00.210467    8967 kapi.go:59] client config for running-upgrade-386000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.key", CAFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106291800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
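The kapi.go dump above shows the rest.Config minikube assembles from the profile's client certificate, client key, and cluster CA. A minimal client-go sketch building an equivalent config and clientset; the field values are taken from the dump, and constructing the config directly (rather than via a kubeconfig file) is an assumption for illustration:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Host and file paths taken from the rest.Config dump above.
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/running-upgrade-386000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("clientset ready:", clientset != nil)
    }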
	I0914 23:46:00.210616    8967 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-386000"
	W0914 23:46:00.210622    8967 addons.go:243] addon default-storageclass should already be in state true
	I0914 23:46:00.210629    8967 host.go:66] Checking if "running-upgrade-386000" exists ...
	I0914 23:46:00.211179    8967 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 23:46:00.211185    8967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 23:46:00.211191    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	I0914 23:46:00.213585    8967 out.go:177] * Verifying Kubernetes components...
	I0914 23:46:00.219608    8967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:46:00.315110    8967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 23:46:00.319842    8967 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:46:00.319892    8967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:46:00.324000    8967 api_server.go:72] duration metric: took 114.602042ms to wait for apiserver process to appear ...
	I0914 23:46:00.324008    8967 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:46:00.324016    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:00.389537    8967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 23:46:00.695077    8967 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 23:46:00.695090    8967 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 23:46:01.527529    8967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:46:01.531510    8967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:46:01.531517    8967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 23:46:01.531525    8967 sshutil.go:53] new ssh client: &{IP:localhost Port:51266 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/running-upgrade-386000/id_rsa Username:docker}
	I0914 23:46:01.572903    8967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:45:58.936502    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:58.936535    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:05.325720    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:05.325753    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:03.936771    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:03.936811    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:10.325919    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:10.325961    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:08.937158    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:08.937180    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:15.326584    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:15.326603    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:13.937601    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:13.937643    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:18.938037    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:18.938099    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0914 23:46:19.298938    8956 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0914 23:46:19.303568    8956 out.go:177] * Enabled addons: storage-provisioner
	I0914 23:46:20.326899    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:20.326929    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:19.311415    8956 addons.go:510] duration metric: took 30.470910625s for enable addons: enabled=[storage-provisioner]
	I0914 23:46:25.327332    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:25.327370    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:23.939335    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:23.939373    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:30.328030    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:30.328068    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0914 23:46:30.696887    8967 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0914 23:46:30.701097    8967 out.go:177] * Enabled addons: storage-provisioner
	I0914 23:46:30.708951    8967 addons.go:510] duration metric: took 30.500117625s for enable addons: enabled=[storage-provisioner]
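The default-storageclass failure above happens in the addon callback: it must list StorageClasses and mark "standard" as the default, and the List call times out because the apiserver never becomes reachable. A sketch of what that callback does, assuming client-go and the standard is-default-class annotation (the annotation key is the upstream Kubernetes one; the exact minikube implementation may differ):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// This List is the call that times out in the error above when the
    	// apiserver is unreachable ("dial tcp 10.0.2.15:8443: i/o timeout").
    	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, sc := range scs.Items {
    		if sc.Name != "standard" {
    			continue
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    		if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), &sc, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    		fmt.Println("standard is now the default StorageClass")
    	}
    }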
	I0914 23:46:28.940450    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:28.940486    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:35.328890    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:35.328930    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:33.941698    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:33.941739    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:40.330019    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:40.330057    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:38.942037    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:38.942079    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:45.331422    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:45.331449    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:43.943607    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:43.943629    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:50.332665    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:50.332694    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:48.945724    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:48.945841    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:46:48.957863    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:46:48.957947    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:46:48.967968    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:46:48.968052    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:46:48.978470    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:46:48.978556    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:46:48.989446    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:46:48.989528    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:46:48.999667    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:46:48.999751    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:46:49.010326    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:46:49.010408    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:46:49.020064    8956 logs.go:276] 0 containers: []
	W0914 23:46:49.020074    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:46:49.020135    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:46:49.030469    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:46:49.030487    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:46:49.030493    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:46:49.066850    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:46:49.066862    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:46:49.081616    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:46:49.081635    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:46:49.093901    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:46:49.093912    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:46:49.106202    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:46:49.106212    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:46:49.123899    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:46:49.123908    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:46:49.135437    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:46:49.135447    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:46:49.165675    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:46:49.165686    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:46:49.169540    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:46:49.169549    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:46:49.183090    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:46:49.183102    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:46:49.199343    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:46:49.199354    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:46:49.214715    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:46:49.214730    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:46:49.226184    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:46:49.226197    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
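Once healthz keeps failing, each process falls back to the diagnostics pass seen above: discover one container per control-plane component with `docker ps -a --filter=name=k8s_<component>`, then tail 400 lines from each, plus kubelet/docker journals, dmesg, and `kubectl describe nodes`. A minimal sketch of the per-container part of that pass (the k8s_ name prefix comes from cri-dockerd's container naming):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
    func containerIDs(component string) []string {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		ids := containerIDs(c)
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			// Same tail depth as the gathering pass above.
    			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("==> %s [%s] <==\n%s\n", c, id, out)
    		}
    	}
    }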
	I0914 23:46:51.753646    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:55.332972    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:55.333026    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:56.756146    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:56.756342    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:46:56.771047    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:46:56.771141    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:46:56.782922    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:46:56.783010    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:46:56.794238    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:46:56.794316    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:46:56.805463    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:46:56.805545    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:46:56.815902    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:46:56.815972    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:46:56.826794    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:46:56.826872    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:46:56.838071    8956 logs.go:276] 0 containers: []
	W0914 23:46:56.838084    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:46:56.838154    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:46:56.849841    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:46:56.849856    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:46:56.849861    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:46:56.867788    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:46:56.867797    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:46:56.879922    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:46:56.879933    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:46:56.884401    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:46:56.884411    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:46:56.898816    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:46:56.898825    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:46:56.914789    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:46:56.914805    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:46:56.930275    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:46:56.930286    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:46:56.942698    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:46:56.942708    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:46:56.968243    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:46:56.968254    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:46:56.979683    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:46:56.979693    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:46:57.010045    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:46:57.010053    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:46:57.045029    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:46:57.045045    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:46:57.057731    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:46:57.057741    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:00.335245    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:00.335425    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:00.346411    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:00.346498    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:00.357205    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:00.357291    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:00.367801    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:00.367879    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:00.378210    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:00.378282    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:00.389090    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:00.389171    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:00.399558    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:00.399649    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:00.410712    8967 logs.go:276] 0 containers: []
	W0914 23:47:00.410723    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:00.410798    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:00.421668    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:00.421686    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:00.421691    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:00.433545    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:00.433559    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:00.458371    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:00.458379    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:00.494497    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:00.494509    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:00.512129    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:00.512139    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:00.525607    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:00.525618    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:00.537312    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:00.537326    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:00.548731    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:00.548747    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:00.564060    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:00.564071    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:00.575577    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:00.575592    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:00.612742    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:00.612753    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:00.617282    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:00.617288    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:00.635564    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:00.635575    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:46:59.571854    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:03.151877    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:04.574179    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:04.574416    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:04.594930    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:04.595069    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:04.614185    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:04.614276    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:04.625749    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:04.625831    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:04.636177    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:04.636261    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:04.646321    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:04.646405    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:04.656556    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:04.656633    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:04.666806    8956 logs.go:276] 0 containers: []
	W0914 23:47:04.666825    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:04.666899    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:04.677294    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:04.677308    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:04.677314    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:04.688375    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:04.688386    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:04.692614    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:04.692620    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:04.706305    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:04.706316    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:04.718768    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:04.718778    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:04.732843    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:04.732857    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:04.757501    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:04.757511    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:04.769141    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:04.769152    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:04.786719    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:04.786729    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:04.819324    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:04.819337    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:04.852778    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:04.852789    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:04.867068    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:04.867082    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:04.880792    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:04.880805    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:07.399590    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:08.154009    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:08.154137    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:08.165538    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:08.165625    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:08.177718    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:08.177805    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:08.191193    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:08.191269    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:08.201691    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:08.201772    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:08.212373    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:08.212456    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:08.222577    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:08.222665    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:08.233362    8967 logs.go:276] 0 containers: []
	W0914 23:47:08.233375    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:08.233452    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:08.244125    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:08.244157    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:08.244162    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:08.255439    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:08.255449    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:08.292485    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:08.292494    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:08.315726    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:08.315737    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:08.328187    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:08.328201    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:08.340817    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:08.340828    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:08.352490    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:08.352500    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:08.374121    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:08.374133    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:08.397262    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:08.397270    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:08.401613    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:08.401621    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:08.436320    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:08.436331    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:08.456159    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:08.456170    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:08.467893    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:08.467904    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:10.986514    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:12.404702    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:12.404848    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:12.419057    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:12.419141    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:12.430333    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:12.430429    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:12.441206    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:12.441290    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:12.452165    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:12.452252    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:12.463125    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:12.463212    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:12.474032    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:12.474117    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:12.484122    8956 logs.go:276] 0 containers: []
	W0914 23:47:12.484133    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:12.484204    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:12.494648    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:12.494663    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:12.494668    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:12.506270    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:12.506280    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:12.524119    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:12.524128    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:12.535718    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:12.535728    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:12.550740    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:12.550750    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:12.563026    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:12.563036    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:12.595123    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:12.595131    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:12.599291    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:12.599299    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:12.640494    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:12.640515    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:12.655367    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:12.655377    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:12.669910    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:12.669920    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:12.681330    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:12.681341    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:12.704919    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:12.704927    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:15.995090    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:15.995298    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:16.016956    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:16.017049    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:16.030661    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:16.030750    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:16.043866    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:16.043949    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:16.055617    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:16.055704    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:16.066591    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:16.066687    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:16.077263    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:16.077345    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:16.087295    8967 logs.go:276] 0 containers: []
	W0914 23:47:16.087306    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:16.087375    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:16.097669    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:16.097687    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:16.097692    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:16.119161    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:16.119174    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:16.144081    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:16.144089    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:16.179548    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:16.179558    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:16.184278    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:16.184284    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:16.198708    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:16.198717    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:16.217251    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:16.217268    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:16.233981    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:16.233996    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:16.249968    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:16.249982    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:16.261810    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:16.261825    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:16.297051    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:16.297064    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:16.312173    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:16.312188    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:16.324359    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:16.324371    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
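
Annotation: each retry cycle starts with the same discovery pass, one docker ps -a per control-plane component, filtered on the k8s_<component> container-name prefix and formatted down to bare IDs. That is what produces the "N containers: [...]" lines from logs.go:276, and the warning when the kindnet filter matches nothing. A sketch of that pass, using the exact command visible in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs runs the same command the report shows:
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }
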
	I0914 23:47:15.221432    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:18.839111    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:20.228374    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:20.228474    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:20.239274    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:20.239370    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:20.250846    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:20.250930    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:20.261865    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:20.261952    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:20.272159    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:20.272242    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:20.283065    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:20.283144    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:20.293382    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:20.293453    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:20.303506    8956 logs.go:276] 0 containers: []
	W0914 23:47:20.303517    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:20.303578    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:20.314340    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:20.314354    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:20.314359    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:20.331690    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:20.331699    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:20.342671    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:20.342681    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:20.373251    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:20.373259    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:20.387171    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:20.387184    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:20.404836    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:20.404846    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:20.416595    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:20.416610    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:20.432424    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:20.432438    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:20.443998    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:20.444012    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:20.468766    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:20.468773    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:20.472869    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:20.472874    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:20.507634    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:20.507644    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:20.519904    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:20.519913    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:23.035156    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:23.844342    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:23.844521    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:23.860258    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:23.860359    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:23.872398    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:23.872480    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:23.883251    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:23.883333    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:23.894068    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:23.894154    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:23.904558    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:23.904644    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:23.916898    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:23.916981    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:23.927836    8967 logs.go:276] 0 containers: []
	W0914 23:47:23.927848    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:23.927920    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:23.938916    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:23.938931    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:23.938936    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:23.958921    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:23.958931    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:23.970841    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:23.970854    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:23.982692    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:23.982702    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:23.999887    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:23.999899    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:24.024361    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:24.024369    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:24.040874    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:24.040887    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:24.046041    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:24.046050    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:24.084121    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:24.084138    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:24.107882    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:24.107893    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:24.119985    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:24.119998    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:24.135736    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:24.135745    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:24.147758    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:24.147772    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:26.686242    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:28.040224    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:28.040379    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:28.055530    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:28.055628    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:28.066843    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:28.066930    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:28.077719    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:28.077802    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:28.088393    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:28.088475    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:28.099540    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:28.099630    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:28.110220    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:28.110300    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:28.119804    8956 logs.go:276] 0 containers: []
	W0914 23:47:28.119814    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:28.119877    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:28.132262    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:28.132279    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:28.132285    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:28.146297    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:28.146307    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:28.161091    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:28.161102    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:28.184889    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:28.184896    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:28.214592    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:28.214603    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:28.250194    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:28.250204    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:28.264217    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:28.264227    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:28.275632    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:28.275643    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:28.292959    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:28.292968    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:28.304359    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:28.304370    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:28.315871    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:28.315880    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:28.320726    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:28.320733    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:28.335477    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:28.335488    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
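
Annotation: after discovery, every cycle fans out over the same fixed set of log sources: journalctl for kubelet and for docker/cri-docker, a filtered dmesg, kubectl describe nodes via the pinned v1.24.1 binary, and docker logs --tail 400 for each container ID found above. Only the iteration order varies between cycles. A compressed sketch of that fan-out, with the one-liners copied from the log (the gather helper and its error handling are assumptions; the real code sits behind ssh_runner.go and logs.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather mirrors one "Gathering logs for X ..." / ssh_runner Run pair.
    func gather(name, cmd string) {
    	fmt.Printf("Gathering logs for %s ...\n", name)
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Printf("  gather failed: %v\n", err) // per-source failures are tolerated
    		return
    	}
    	fmt.Printf("  %d bytes collected\n", len(out))
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
    	gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
    	// Container sources reuse an ID from the discovery pass, e.g. the
    	// kube-apiserver container seen above:
    	gather("kube-apiserver [c5f1a09efc92]", "docker logs --tail 400 c5f1a09efc92")
    }
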
	I0914 23:47:31.690781    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:31.690964    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:31.709344    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:31.709454    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:31.724705    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:31.724789    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:31.737146    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:31.737238    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:31.747659    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:31.747740    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:31.761466    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:31.761550    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:31.772337    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:31.772420    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:31.782563    8967 logs.go:276] 0 containers: []
	W0914 23:47:31.782573    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:31.782645    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:31.792897    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:31.792911    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:31.792917    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:31.797971    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:31.797978    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:31.814093    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:31.814109    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:31.828672    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:31.828683    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:31.845697    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:31.845711    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:31.869150    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:31.869158    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:31.880272    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:31.880282    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:31.891583    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:31.891598    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:31.926880    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:31.926891    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:31.962538    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:31.962554    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:31.974903    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:31.974912    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:31.986583    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:31.986597    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:32.002308    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:32.002319    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:30.850837    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:34.516888    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:35.854936    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:35.855102    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:35.870319    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:35.870418    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:35.881290    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:35.881379    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:35.891692    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:35.891775    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:35.904132    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:35.904208    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:35.914889    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:35.914972    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:35.925345    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:35.925423    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:35.935737    8956 logs.go:276] 0 containers: []
	W0914 23:47:35.935754    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:35.935832    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:35.947001    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:35.947015    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:35.947020    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:35.962224    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:35.962240    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:35.966886    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:35.966893    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:36.022718    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:36.022730    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:36.037381    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:36.037392    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:36.055626    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:36.055643    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:36.067579    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:36.067593    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:36.082684    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:36.082695    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:36.100363    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:36.100374    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:36.112372    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:36.112382    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:36.144130    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:36.144143    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:36.155971    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:36.155981    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:36.169922    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:36.169934    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:38.697490    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:39.520566    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:39.520789    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:39.534751    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:39.534848    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:39.546059    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:39.546142    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:39.556458    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:39.556540    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:39.568069    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:39.568144    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:39.578551    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:39.578628    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:39.592340    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:39.592414    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:39.602699    8967 logs.go:276] 0 containers: []
	W0914 23:47:39.602712    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:39.602777    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:39.617743    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:39.617759    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:39.617765    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:39.622724    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:39.622732    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:39.640447    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:39.640460    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:39.652530    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:39.652540    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:39.663799    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:39.663809    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:39.700286    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:39.700295    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:39.734892    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:39.734902    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:39.749326    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:39.749341    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:39.761070    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:39.761080    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:39.776895    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:39.776906    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:39.788633    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:39.788644    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:39.806337    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:39.806349    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:39.817853    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:39.817864    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:42.344935    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:43.700794    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:43.701005    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:43.718713    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:43.718837    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:43.735370    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:43.735465    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:43.748263    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:43.748348    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:43.759461    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:43.759539    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:47.345947    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:47.346093    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:47.357373    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:47.357451    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:47.368282    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:47.368367    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:47.378596    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:47.378684    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:47.389303    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:47.389378    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:47.399802    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:47.399880    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:47.411192    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:47.411277    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:47.421767    8967 logs.go:276] 0 containers: []
	W0914 23:47:47.421778    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:47.421848    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:47.435952    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:47.435967    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:47.435972    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:47.450921    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:47.450934    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:47.463326    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:47.463341    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:47.498468    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:47.498475    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:47.532536    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:47.532547    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:47.544364    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:47.544376    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:47.560332    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:47.560343    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:47.571940    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:47.571951    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:47.593518    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:47.593528    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:47.606421    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:47.606429    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:47.630466    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:47.630475    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:47.634774    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:47.634783    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:47.648962    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:47.648972    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:43.777556    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:43.777643    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:43.788267    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:43.788350    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:43.798484    8956 logs.go:276] 0 containers: []
	W0914 23:47:43.798495    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:43.798568    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:43.813382    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:43.813399    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:43.813404    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:43.828775    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:43.828787    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:43.840385    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:43.840397    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:43.858864    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:43.858875    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:43.884461    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:43.884468    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:43.915771    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:43.915779    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:43.954537    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:43.954548    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:43.969790    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:43.969828    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:43.984254    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:43.984265    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:43.995822    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:43.995834    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:44.000699    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:44.000705    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:44.014151    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:44.014161    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:44.026053    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:44.026067    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:46.548669    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:50.163796    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:51.551579    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:51.551768    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:51.564272    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:51.564364    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:51.574851    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:51.574935    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:51.585371    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:47:51.585460    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:51.595793    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:51.595875    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:51.606044    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:51.606126    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:51.617318    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:51.617405    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:51.633021    8956 logs.go:276] 0 containers: []
	W0914 23:47:51.633032    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:51.633106    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:51.643944    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:51.643961    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:51.643966    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:51.658529    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:51.658539    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:51.670294    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:51.670307    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:51.682056    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:51.682065    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:51.686419    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:51.686429    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:51.701746    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:47:51.701760    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:47:51.713218    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:47:51.713229    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:47:51.724899    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:51.724913    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:51.744034    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:51.744042    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:51.760592    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:51.760602    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:51.772361    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:51.772375    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:51.789790    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:51.789799    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:51.820475    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:51.820483    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:51.856433    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:51.856447    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:51.868075    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:51.868089    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
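
Annotation: note the coredns line in the cycle above. At 23:47:51 the filter returns 4 containers [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288] where earlier cycles returned 2. Because discovery uses docker ps -a, exited containers (most likely from restarted coredns pods) stay in the list next to their replacements, and the gather step then pulls logs from all four, which is presumably deliberate so crashed containers still get their logs collected. For contrast, a sketch of the narrower query that would count only live coredns containers (an assumption about intent, not minikube's behavior):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same filter as the report, restricted to running containers.
    	out, err := exec.Command("docker", "ps",
    		"--filter", "name=k8s_coredns",
    		"--filter", "status=running",
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("%d running coredns containers: %v\n", len(ids), ids)
    }
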
	I0914 23:47:55.166467    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:55.166626    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:55.179143    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:47:55.179233    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:55.197796    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:47:55.197877    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:55.208732    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:47:55.208819    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:55.219515    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:47:55.219608    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:55.231317    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:47:55.231394    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:55.242592    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:47:55.242682    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:55.253523    8967 logs.go:276] 0 containers: []
	W0914 23:47:55.253537    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:55.253610    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:55.264733    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:47:55.264748    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:55.264754    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:55.270024    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:55.270031    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:55.306080    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:47:55.306091    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:47:55.320561    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:47:55.320571    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:47:55.332696    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:47:55.332707    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:47:55.350853    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:47:55.350864    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:47:55.362813    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:55.362823    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:55.399271    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:47:55.399280    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:47:55.413596    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:47:55.413607    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:47:55.425600    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:47:55.425611    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:47:55.437308    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:47:55.437320    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:47:55.453315    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:55.453325    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:55.477148    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:47:55.477157    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:54.394928    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:57.990159    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:59.397527    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:59.397730    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:59.419825    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:59.419922    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:59.431641    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:59.431724    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:59.442054    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:47:59.442144    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:59.452570    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:59.452653    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:59.463602    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:59.463681    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:59.473957    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:59.474026    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:59.485058    8956 logs.go:276] 0 containers: []
	W0914 23:47:59.485073    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:59.485149    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:59.495109    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:59.495129    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:47:59.495134    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:47:59.506232    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:59.506245    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:59.523635    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:59.523645    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:59.558572    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:59.558583    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:59.573040    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:59.573054    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:59.585032    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:59.585043    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:59.600388    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:59.600398    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:59.625433    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:59.625441    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:59.630225    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:59.630233    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:59.644506    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:47:59.644516    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:47:59.657440    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:59.657452    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:59.669448    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:59.669459    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:59.700607    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:59.700615    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:59.711971    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:59.711982    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:59.724237    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:59.724247    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
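
Annotation: the "container status" source uses a shell fallback chain rather than a single binary. The substitution `which crictl || echo crictl` resolves the installed crictl (or leaves the bare name so the eventual failure message stays legible), and the outer || falls back to sudo docker ps -a when the crictl invocation fails entirely. A sketch that runs the same chain verbatim:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Copied from the report; the backticks are bash command substitution.
    	const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("both crictl and docker listing failed:", err)
    		return
    	}
    	fmt.Print(string(out))
    }
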
	I0914 23:48:02.236229    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:02.992254    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:02.992417    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:03.003151    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:03.003239    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:03.014613    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:03.014692    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:03.025422    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:03.025503    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:03.036039    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:03.036108    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:03.051989    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:03.052077    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:03.062564    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:03.062645    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:03.076692    8967 logs.go:276] 0 containers: []
	W0914 23:48:03.076704    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:03.076771    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:03.087537    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:03.087553    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:03.087559    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:03.099200    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:03.099210    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:03.134522    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:03.134538    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:03.149594    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:03.149607    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:03.162741    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:03.162752    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:03.175206    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:03.175222    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:03.187381    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:03.187397    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:03.206223    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:03.206233    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:03.218237    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:03.218251    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:03.253708    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:03.253717    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:03.258828    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:03.258836    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:03.272792    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:03.272805    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:03.288537    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:03.288548    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
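Besides per-container logs, every cycle also pulls host-side logs over SSH: journalctl scoped to the kubelet and the docker/cri-docker units, plus a filtered dmesg pass. The dmesg flags keep the output terse: -P disables the pager, -H requests human-readable timestamps, -L=never strips color codes, and --level restricts output to warning severity and worse, with tail bounding it to 400 lines. The three host-side commands, verbatim from the cycles above, can be re-run on the node as:

    #!/bin/bash
    # Host-side log gathering used by each cycle.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400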
	I0914 23:48:05.816240    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:07.238781    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:07.238945    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:07.255381    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:07.255482    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:07.267871    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:07.267961    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:07.279006    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:07.279081    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:07.289453    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:07.289535    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:07.301450    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:07.301531    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:07.312089    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:07.312164    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:07.322675    8956 logs.go:276] 0 containers: []
	W0914 23:48:07.322687    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:07.322750    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:07.334888    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:07.334916    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:07.334923    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:07.350670    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:07.350685    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:07.362702    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:07.362714    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:07.384414    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:07.384424    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:07.419354    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:07.419363    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:07.431043    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:07.431054    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:07.443133    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:07.443145    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:07.468290    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:07.468299    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:07.498276    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:07.498283    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:07.509873    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:07.509886    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:07.521349    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:07.521359    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:07.525955    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:07.525961    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:07.541641    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:07.541654    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:07.553549    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:07.553560    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:07.571475    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:07.571485    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
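Every "Gathering logs for <component> [<id>]" pair resolves to the same primitive: docker logs --tail 400 against the container ID discovered in the preceding enumeration, with --tail keeping each dump bounded. The per-component calls can be collapsed into one sweep over everything the k8s_ prefix matches, a sketch:

    #!/bin/bash
    # Tail the last 400 log lines of every kube-managed container.
    for id in $(docker ps -aq --filter=name=k8s_); do
      echo "==== $id ===="
      docker logs --tail 400 "$id" 2>&1
    done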
	I0914 23:48:10.818569    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:10.818765    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:10.833697    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:10.833802    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:10.845446    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:10.845516    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:10.856186    8967 logs.go:276] 2 containers: [d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:10.856256    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:10.866672    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:10.866750    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:10.877656    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:10.877745    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:10.896322    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:10.896406    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:10.907087    8967 logs.go:276] 0 containers: []
	W0914 23:48:10.907098    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:10.907159    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:10.926239    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:10.926254    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:10.926259    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:10.951053    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:10.951061    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:10.988041    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:10.988054    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:10.992619    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:10.992627    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:11.006669    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:11.006679    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:11.018347    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:11.018362    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:11.033743    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:11.033752    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:11.044934    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:11.044943    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:11.062692    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:11.062706    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:11.074104    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:11.074113    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:11.110653    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:11.110663    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:11.125028    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:11.125039    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:11.137003    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:11.137014    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:10.087782    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:13.650488    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:15.090198    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:15.090322    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:15.102991    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:15.103081    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:15.113760    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:15.113841    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:15.124636    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:15.124723    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:15.135198    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:15.135280    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:15.146023    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:15.146112    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:15.156548    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:15.156634    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:15.166797    8956 logs.go:276] 0 containers: []
	W0914 23:48:15.166807    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:15.166872    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:15.177629    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:15.177651    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:15.177657    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:15.189448    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:15.189458    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:15.201439    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:15.201450    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:15.234079    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:15.234091    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:15.255369    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:15.255384    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:15.267098    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:15.267111    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:15.292566    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:15.292574    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:15.303968    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:15.303979    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:15.318252    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:15.318263    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:15.356903    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:15.356916    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:15.369095    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:15.369105    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:15.373704    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:15.373710    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:15.389938    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:15.389949    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:15.401560    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:15.401571    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:15.427073    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:15.427091    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
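The recurring "describe nodes" step does not depend on any kubectl on the host: it invokes the kubectl binary minikube provisioned inside the guest, pinned to the cluster's Kubernetes version (v1.24.1 here), against the in-guest kubeconfig. Re-running it manually on the node looks like this, verbatim from the log:

    #!/bin/bash
    # Describe nodes with the guest's pinned kubectl.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig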
	I0914 23:48:17.943073    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:18.652806    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:18.653028    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:18.670673    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:18.670764    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:18.684164    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:18.684261    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:18.695388    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:18.695478    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:18.705879    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:18.705953    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:18.716669    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:18.716750    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:18.727442    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:18.727521    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:18.737494    8967 logs.go:276] 0 containers: []
	W0914 23:48:18.737506    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:18.737583    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:18.749823    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:18.749839    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:18.749845    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:18.762290    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:18.762301    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:18.783754    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:18.783763    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:18.795661    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:18.795672    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:18.807277    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:18.807288    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:48:18.819512    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:18.819523    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:18.831268    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:18.831279    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:18.843326    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:18.843339    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:18.848077    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:18.848085    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:18.884903    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:18.884914    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:18.900319    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:18.900330    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:18.912322    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:18.912338    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:18.936120    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:18.936129    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:18.971241    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:18.971252    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:18.986109    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:18.986119    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:21.505481    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:22.943652    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:22.943877    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:22.961752    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:22.961846    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:22.972000    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:22.972081    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:22.982674    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:22.982761    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:22.993641    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:22.993720    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:23.016370    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:23.016452    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:23.031412    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:23.031495    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:23.041677    8956 logs.go:276] 0 containers: []
	W0914 23:48:23.041689    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:23.041757    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:23.051893    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:23.051909    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:23.051914    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:23.067955    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:23.067966    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:23.093547    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:23.093555    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:23.124680    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:23.124690    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:23.159823    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:23.159832    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:23.164135    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:23.164142    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:23.175565    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:23.175576    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:23.187656    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:23.187665    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:23.202838    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:23.202848    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:23.214440    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:23.214451    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:23.229321    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:23.229329    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:23.243856    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:23.243870    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:23.256302    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:23.256310    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:23.274186    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:23.274195    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:23.286016    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:23.286026    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:26.507893    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:26.508087    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:26.525423    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:26.525524    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:26.539581    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:26.539674    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:26.550986    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:26.551075    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:26.561685    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:26.561760    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:26.572820    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:26.572898    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:26.583133    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:26.583215    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:26.594095    8967 logs.go:276] 0 containers: []
	W0914 23:48:26.594112    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:26.594173    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:26.608401    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:26.608417    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:26.608423    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:26.626535    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:26.626546    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:26.641425    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:26.641438    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:48:26.653002    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:26.653015    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:26.668316    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:26.668326    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:26.679575    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:26.679585    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:26.695022    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:26.695033    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:26.718591    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:26.718599    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:26.755405    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:26.755413    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:26.773034    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:26.773043    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:26.790652    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:26.790662    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:26.803900    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:26.803913    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:26.808337    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:26.808346    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:26.859587    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:26.859599    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:26.874161    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:26.874176    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:25.800703    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:29.389702    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:30.803035    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:30.803219    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:30.819687    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:30.819799    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:30.833041    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:30.833133    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:30.844503    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:30.844580    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:30.854688    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:30.854776    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:30.865772    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:30.865855    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:30.876593    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:30.876679    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:30.887264    8956 logs.go:276] 0 containers: []
	W0914 23:48:30.887273    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:30.887336    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:30.897767    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:30.897785    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:30.897791    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:30.930729    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:30.930743    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:30.935523    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:30.935530    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:30.973015    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:30.973026    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:30.985123    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:30.985136    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:30.997382    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:30.997392    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:31.021697    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:31.021704    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:31.036031    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:31.036042    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:31.055314    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:31.055323    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:31.067237    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:31.067249    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:31.082036    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:31.082053    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:31.093745    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:31.093760    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:31.105192    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:31.105206    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:31.121299    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:31.121308    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:31.138202    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:31.138212    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:33.652150    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:34.391957    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:34.392062    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:34.407914    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:34.407999    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:34.418601    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:34.418685    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:34.429696    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:34.429781    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:34.440385    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:34.440459    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:34.451719    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:34.451805    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:34.462658    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:34.462742    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:34.473304    8967 logs.go:276] 0 containers: []
	W0914 23:48:34.473315    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:34.473377    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:34.483638    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:34.483655    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:34.483661    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:34.521120    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:34.521129    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:34.546164    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:34.546171    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:34.561230    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:34.561241    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:34.573131    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:34.573141    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:34.577982    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:34.577990    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:34.614022    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:34.614033    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:34.638616    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:34.638628    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:48:34.651210    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:34.651221    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:34.668384    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:34.668393    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:34.682784    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:34.682794    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:34.694094    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:34.694106    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:34.705563    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:34.705573    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:34.723580    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:34.723595    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:34.741593    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:34.741609    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:37.255093    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:38.654430    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:38.654658    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:38.673469    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:38.673582    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:38.688135    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:38.688221    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:38.700475    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:38.700557    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:38.710923    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:38.710991    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:38.721711    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:38.721798    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:38.732302    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:38.732377    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:38.742580    8956 logs.go:276] 0 containers: []
	W0914 23:48:38.742591    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:38.742660    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:38.753316    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:38.753333    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:38.753340    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:38.764717    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:38.764727    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:42.257365    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:42.257576    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:42.271499    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:42.271599    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:42.283124    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:42.283209    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:42.294279    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:42.294366    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:42.304948    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:42.305028    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:42.316334    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:42.316415    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:42.328846    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:42.328927    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:42.338987    8967 logs.go:276] 0 containers: []
	W0914 23:48:42.338997    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:42.339062    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:42.349624    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:42.349640    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:42.349645    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:42.362157    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:42.362167    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:42.374038    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:42.374048    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:42.391352    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:42.391361    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:42.407306    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:42.407316    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:42.419352    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:42.419361    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:42.454980    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:42.454993    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:42.459615    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:42.459621    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:42.474493    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:42.474504    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:42.486611    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:42.486622    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:42.503956    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:42.503966    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:42.527742    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:42.527749    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:42.562473    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:42.562482    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:42.575314    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:42.575328    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:42.586854    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:42.586864    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:48:38.781767    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:38.781781    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:38.806005    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:38.806014    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:38.817739    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:38.817754    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:38.834351    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:38.834366    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:38.841173    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:38.841183    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:38.855495    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:38.855506    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:38.867736    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:38.867746    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:38.887471    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:38.887484    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:38.900350    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:38.900362    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:38.932018    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:38.932033    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:38.969082    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:38.969092    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:38.980979    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:38.980989    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:38.995159    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:38.995172    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:41.510744    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:45.102659    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:46.512997    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:46.513160    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:46.524713    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:46.524801    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:46.535680    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:46.535770    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:46.546427    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:46.546511    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:46.557255    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:46.557338    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:46.568045    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:46.568125    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:46.579115    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:46.579190    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:46.589385    8956 logs.go:276] 0 containers: []
	W0914 23:48:46.589399    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:46.589462    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:46.600951    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:46.600968    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:46.600972    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:46.612533    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:46.612544    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:46.623883    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:46.623893    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:46.639579    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:46.639593    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:46.653767    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:46.653780    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:46.665272    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:46.665282    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:46.699459    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:46.699470    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:46.703705    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:46.703711    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:46.715197    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:46.715207    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:46.727325    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:46.727335    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:46.745181    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:46.745190    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:46.770703    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:46.770710    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:46.801948    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:46.801955    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:46.814205    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:46.814218    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:46.826408    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:46.826420    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
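
The interleaved entries above and below come from two pollers (PIDs 8956 and 8967) stuck in the same wait loop: api_server.go:253 probes https://10.0.2.15:8443/healthz, api_server.go:269 records the client-side timeout ("Client.Timeout exceeded while awaiting headers"), and logs.go then snapshots every component before the next attempt. A minimal standalone sketch of that probe, using only the Go standard library; the 5-second timeout and the skipped certificate verification are illustrative assumptions, not minikube's actual client settings:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A hung apiserver makes this Get return "context deadline exceeded
	// (Client.Timeout exceeded while awaiting headers)", the exact error
	// logged at api_server.go:269.
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed value, not minikube's setting
		Transport: &http.Transport{
			// An ad-hoc client hitting the node IP directly would not trust
			// the apiserver's certificate; skipping verification here is an
			// assumption for the sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
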
	I0914 23:48:50.104974    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:50.105154    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:50.122616    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:50.122717    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:50.134959    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:50.135037    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:50.146307    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:50.146390    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:50.157346    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:50.157425    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:50.168354    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:50.168436    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:50.179183    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:50.179263    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:50.189717    8967 logs.go:276] 0 containers: []
	W0914 23:48:50.189728    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:50.189797    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:50.200473    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:50.200490    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:50.200496    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:50.221296    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:50.221306    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:50.232563    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:50.232573    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:50.257991    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:50.258010    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:50.271324    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:50.271339    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:50.275694    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:50.275700    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:50.287758    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:50.287770    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:48:50.299667    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:50.299676    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:50.317182    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:50.317193    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:50.332154    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:50.332164    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:50.345087    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:50.345098    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:50.357235    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:50.357246    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:50.392538    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:50.392546    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:50.427369    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:50.427381    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:50.441559    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:50.441570    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
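
Each diagnostic pass starts by mapping control-plane components to container IDs: one `docker ps -a` per component, filtered on the kubelet's k8s_<component> name prefix and printing only the ID column (logs.go:276). The recurring kindnet warning is just this lookup returning zero matches. A sketch that reproduces the lookup, assuming a docker CLI on PATH inside the node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// Same invocation as the ssh_runner lines above.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: docker ps failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
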
	I0914 23:48:52.953591    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:49.348395    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:57.955971    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:57.956195    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:54.350572    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:54.350809    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:54.371570    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:54.371690    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:54.386819    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:54.386918    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:54.399271    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:54.399352    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:54.410140    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:54.410219    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:54.420800    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:54.420875    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:54.431124    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:54.431205    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:54.441729    8956 logs.go:276] 0 containers: []
	W0914 23:48:54.441741    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:54.441805    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:54.453098    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:54.453115    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:54.453123    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:54.471932    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:54.471943    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:54.483329    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:54.483339    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:54.498296    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:54.498308    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:54.530458    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:54.530467    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:54.534962    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:54.534970    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:54.550807    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:54.550816    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:54.563052    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:54.563064    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:54.575436    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:54.575446    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:54.600605    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:54.600613    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:54.611906    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:54.611915    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:54.648189    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:54.648200    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:54.660134    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:54.660145    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:54.672779    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:54.672790    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:54.690984    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:54.690995    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
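
With the IDs in hand, each "Gathering logs for <component> [<id>] ..." line at logs.go:123 is followed by a /bin/bash -c "docker logs --tail 400 <id>" run that tails the last 400 lines of that container. A sketch of the same collection step; the two IDs are taken from the 8956 pass above and would differ on any other run:

package main

import (
	"fmt"
	"os/exec"
)

// gather tails the last 400 log lines of one container, mirroring the
// ssh_runner command in the report. CombinedOutput is used because
// `docker logs` replays both the container's stdout and stderr.
func gather(name, id string) {
	fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
	out, err := exec.Command("/bin/bash", "-c",
		"docker logs --tail 400 "+id).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	gather("kube-apiserver", "c5f1a09efc92")
	gather("etcd", "4fd0c23b9b01")
}
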
	I0914 23:48:57.205279    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:57.983938    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:48:57.984030    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:57.996213    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:48:57.996300    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:58.008860    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:48:58.008965    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:58.019361    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:48:58.019441    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:58.029933    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:48:58.030007    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:58.040268    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:48:58.040359    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:58.051136    8967 logs.go:276] 0 containers: []
	W0914 23:48:58.051148    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:58.051210    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:58.063575    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:48:58.063592    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:58.063600    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:58.100688    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:58.100698    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:58.124345    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:48:58.124353    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:58.139579    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:48:58.139591    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:48:58.150911    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:58.150921    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:58.187419    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:58.187427    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:58.191948    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:48:58.191957    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:48:58.205951    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:48:58.205962    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:48:58.218424    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:48:58.218439    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:48:58.235981    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:48:58.235991    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:48:58.247878    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:48:58.247888    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:48:58.263727    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:48:58.263737    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:48:58.275371    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:48:58.275380    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:48:58.289297    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:48:58.289312    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:48:58.301414    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:48:58.301425    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:00.816010    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:02.207409    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:02.207541    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:02.225181    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:02.225276    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:02.236622    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:02.236707    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:02.247337    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:49:02.247429    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:02.258807    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:02.258887    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:02.276157    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:02.276236    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:02.286234    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:02.286310    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:02.296943    8956 logs.go:276] 0 containers: []
	W0914 23:49:02.296957    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:02.297033    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:02.307756    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:02.307773    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:02.307779    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:02.338026    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:02.338036    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:02.362061    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:02.362071    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:02.376954    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:02.376967    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:02.391084    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:02.391099    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:02.405772    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:49:02.405782    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:49:02.417196    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:02.417211    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:02.429052    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:02.429063    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:02.433509    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:02.433517    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:02.468363    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:02.468374    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:02.479952    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:02.479962    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:02.495347    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:02.495356    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:02.513049    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:02.513058    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:02.524683    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:49:02.524693    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:49:02.540935    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:02.540950    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:05.818340    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:05.818576    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:05.837963    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:05.838079    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:05.852255    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:05.852355    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:05.864907    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:05.864993    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:05.875185    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:05.875260    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:05.885846    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:05.885935    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:05.897219    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:05.897303    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:05.907869    8967 logs.go:276] 0 containers: []
	W0914 23:49:05.907879    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:05.907946    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:05.920577    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:05.920595    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:05.920602    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:05.962553    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:05.962564    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:05.977079    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:05.977091    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:05.992037    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:05.992047    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:05.996652    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:05.996659    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:06.012161    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:06.012174    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:06.027195    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:06.027205    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:06.039213    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:06.039224    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:06.051524    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:06.051538    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:06.071569    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:06.071582    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:06.090889    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:06.090900    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:06.114282    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:06.114290    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:06.148660    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:06.148671    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:06.171029    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:06.171043    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:06.182978    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:06.182988    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:05.054437    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:08.700638    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:10.056745    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:10.056861    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:10.068075    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:10.068163    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:10.080702    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:10.080787    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:10.092084    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:49:10.092166    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:10.102841    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:10.102923    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:10.113658    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:10.113742    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:10.124170    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:10.124244    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:10.134400    8956 logs.go:276] 0 containers: []
	W0914 23:49:10.134412    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:10.134480    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:10.148402    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:10.148418    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:10.148423    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:10.160113    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:10.160126    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:10.176609    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:10.176619    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:10.193577    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:10.193586    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:10.207811    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:10.207825    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:10.221857    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:10.221866    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:10.226761    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:10.226768    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:10.238904    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:10.238915    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:10.250951    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:10.250961    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:10.276443    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:10.276465    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:10.289427    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:49:10.289437    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:49:10.301382    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:49:10.301393    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:49:10.313484    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:10.313497    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:10.325784    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:10.325794    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:10.356671    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:10.356678    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:12.892081    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:13.702889    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:13.703108    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:13.723121    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:13.723238    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:13.739692    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:13.739792    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:13.752733    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:13.752818    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:13.763250    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:13.763331    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:13.774609    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:13.774694    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:13.784926    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:13.785010    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:13.795187    8967 logs.go:276] 0 containers: []
	W0914 23:49:13.795199    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:13.795270    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:13.805302    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:13.805318    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:13.805324    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:13.820227    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:13.820241    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:13.832861    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:13.832872    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:13.856697    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:13.856706    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:13.861296    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:13.861301    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:13.877400    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:13.877416    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:13.889856    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:13.889867    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:13.902024    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:13.902038    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:13.920010    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:13.920020    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:13.933509    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:13.933520    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:13.951969    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:13.951984    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:13.989086    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:13.989099    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:14.001721    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:14.001732    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:14.019653    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:14.019664    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:14.036689    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:14.036700    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:16.575488    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:17.894399    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:17.894636    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:17.911777    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:17.911884    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:17.925293    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:17.925384    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:17.936562    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:49:17.936641    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:17.947547    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:17.947630    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:17.960031    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:17.960113    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:17.970774    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:17.970860    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:17.981192    8956 logs.go:276] 0 containers: []
	W0914 23:49:17.981202    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:17.981270    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:17.992747    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:17.992763    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:17.992770    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:18.023707    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:18.023714    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:18.038289    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:18.038303    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:18.052599    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:49:18.052609    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:49:18.065851    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:18.065863    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:18.102071    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:18.102087    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:18.114445    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:18.114455    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:18.126768    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:18.126779    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:18.138818    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:49:18.138830    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:49:18.154919    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:18.154927    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:18.172863    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:18.172873    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:18.177076    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:18.177084    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:18.192093    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:18.192103    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:18.204249    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:18.204258    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:18.228689    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:18.228699    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
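
Besides the per-container logs, every pass collects host-side sources: the kubelet and docker/cri-docker journals, filtered kernel messages, `kubectl describe nodes` against the in-VM kubeconfig, and a container-status listing. The commands below are copied verbatim from the ssh_runner lines; the small driver around them is only a sketch and assumes it runs inside the minikube node with sudo available:

package main

import (
	"fmt"
	"os/exec"
)

func run(label, cmd string) {
	fmt.Println("Gathering logs for", label, "...")
	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
}

func main() {
	run("kubelet", "sudo journalctl -u kubelet -n 400")
	run("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	run("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	run("describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	// `which crictl || echo crictl` substitutes the bare word crictl when the
	// binary is missing, so the first command fails and the trailing
	// `|| sudo docker ps -a` fallback produces the listing instead.
	run("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
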
	I0914 23:49:21.577721    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:21.577908    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:21.595366    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:21.595466    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:21.608982    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:21.609072    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:21.620746    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:21.620834    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:21.631449    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:21.631531    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:21.642247    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:21.642328    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:21.653088    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:21.653171    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:21.663108    8967 logs.go:276] 0 containers: []
	W0914 23:49:21.663119    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:21.663194    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:21.673479    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:21.673498    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:21.673504    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:21.690842    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:21.690856    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:21.702700    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:21.702713    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:21.719700    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:21.719714    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:21.737296    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:21.737305    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:21.749106    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:21.749116    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:21.766421    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:21.766434    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:21.789447    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:21.789456    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:21.801570    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:21.801583    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:21.806644    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:21.806652    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:21.819783    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:21.819794    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:21.835396    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:21.835409    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:21.871317    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:21.871325    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:21.882543    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:21.882553    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:21.894584    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:21.894593    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:20.743034    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:24.432939    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:25.745317    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:25.745562    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:25.763985    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:25.764093    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:25.779429    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:25.779508    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:25.803701    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:49:25.803795    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:25.816458    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:25.816547    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:25.832401    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:25.832481    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:25.842832    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:25.842918    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:25.854033    8956 logs.go:276] 0 containers: []
	W0914 23:49:25.854046    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:25.854119    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:25.864643    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:25.864659    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:25.864665    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:25.876599    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:25.876610    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:25.911692    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:25.911702    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:25.925932    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:25.925948    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:25.937430    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:49:25.937444    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:49:25.949533    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:25.949545    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:25.964004    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:25.964018    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:25.978427    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:25.978439    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:25.997146    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:25.997157    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:26.020556    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:26.020563    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:26.051674    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:49:26.051687    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:49:26.063922    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:26.063933    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:26.079843    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:26.079853    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:26.084095    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:26.084103    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:26.096152    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:26.096163    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:28.615137    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:29.433676    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:29.433873    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:29.448082    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:29.448184    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:29.465696    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:29.465779    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:29.477173    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:29.477253    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:29.488026    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:29.488117    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:29.498754    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:29.498827    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:29.508777    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:29.508849    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:29.518617    8967 logs.go:276] 0 containers: []
	W0914 23:49:29.518628    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:29.518705    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:29.529724    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:29.529743    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:29.529749    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:29.564630    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:29.564638    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:29.600065    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:29.600075    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:29.614552    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:29.614568    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:29.626332    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:29.626347    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:29.638935    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:29.638947    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:29.651175    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:29.651190    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:29.673879    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:29.673895    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:29.687804    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:29.687821    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:29.703557    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:29.703570    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:29.726607    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:29.726616    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:29.738626    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:29.738635    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:29.750336    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:29.750350    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:29.755012    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:29.755018    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:29.769627    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:29.769641    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:32.283481    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:33.616771    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:33.616932    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:33.640772    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:33.640866    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:33.655503    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:33.655592    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:33.666244    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:49:33.666332    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:33.676819    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:33.676900    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:33.687982    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:33.688062    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:33.702282    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:33.702360    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:33.712933    8956 logs.go:276] 0 containers: []
	W0914 23:49:33.712947    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:33.713016    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:33.723692    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:33.723710    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:33.723715    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:33.728338    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:33.728345    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:33.763391    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:49:33.763402    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:49:33.775602    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:33.775615    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:37.285492    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:37.285650    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:37.296481    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:37.296566    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:37.306922    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:37.307004    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:37.321664    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:37.321750    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:37.332588    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:37.332671    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:37.343708    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:37.343793    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:37.354601    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:37.354687    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:37.365309    8967 logs.go:276] 0 containers: []
	W0914 23:49:37.365326    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:37.365397    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:37.375915    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:37.375938    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:37.375944    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:37.391460    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:37.391470    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:37.409891    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:37.409903    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:37.424206    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:37.424215    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:37.435735    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:37.435750    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:37.449148    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:37.449158    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:37.484416    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:37.484432    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:37.498932    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:37.498942    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:37.503333    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:37.503340    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:37.515324    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:37.515335    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:37.530077    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:37.530086    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:37.542439    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:37.542450    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:37.566706    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:37.566715    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:37.578193    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:37.578208    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:37.614889    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:37.614899    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:33.788053    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:33.788065    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:33.799706    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:33.799715    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:33.811155    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:33.811169    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:33.841692    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:33.841700    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:33.855450    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:33.855461    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:33.867249    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:33.867260    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:33.878588    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:33.878600    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:33.893060    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:49:33.893075    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:49:33.905163    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:33.905172    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:33.928759    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:33.928770    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:33.950027    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:33.950037    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:36.478600    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:40.130527    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:41.480745    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:41.480857    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:41.492144    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:41.492223    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:41.502824    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:41.502907    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:41.513655    8956 logs.go:276] 4 containers: [5e1f1acf344a 28fb6b188a11 a1372db1fd0a e8c5d4d78795]
	I0914 23:49:41.513758    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:41.524028    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:41.524099    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:41.534551    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:41.534625    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:41.545663    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:41.545738    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:41.555541    8956 logs.go:276] 0 containers: []
	W0914 23:49:41.555553    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:41.555627    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:41.565556    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:41.565575    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:41.565581    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:41.570634    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:41.570642    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:41.585055    8956 logs.go:123] Gathering logs for coredns [5e1f1acf344a] ...
	I0914 23:49:41.585065    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1f1acf344a"
	I0914 23:49:41.596356    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:41.596370    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:41.607958    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:41.607969    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:41.625559    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:41.625569    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:41.657447    8956 logs.go:123] Gathering logs for coredns [28fb6b188a11] ...
	I0914 23:49:41.657456    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fb6b188a11"
	I0914 23:49:41.668837    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:41.668852    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:41.680809    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:41.680819    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:41.706031    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:41.706047    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:41.740901    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:41.740912    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:41.758515    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:41.758526    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:41.770560    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:41.770571    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:41.786617    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:41.786627    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:41.798467    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:41.798476    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:45.132720    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:45.132955    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:45.161586    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:45.161694    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:45.174722    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:45.174809    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:45.186323    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:45.186410    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:45.196735    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:45.196823    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:45.207317    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:45.207391    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:45.217575    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:45.217663    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:45.227934    8967 logs.go:276] 0 containers: []
	W0914 23:49:45.227944    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:45.228010    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:45.240721    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:45.240738    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:45.240744    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:45.245864    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:45.245873    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:45.258286    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:45.258297    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:45.270333    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:45.270344    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:45.282597    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:45.282609    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:45.304735    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:45.304746    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:45.350507    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:45.350522    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:45.364978    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:45.364991    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:45.377683    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:45.377695    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:45.393874    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:45.393887    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:45.406270    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:45.406281    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:45.442269    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:45.442280    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:45.456719    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:45.456732    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:45.469189    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:45.469200    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:45.481538    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:45.481553    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:44.312343    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:49.314621    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:49.318833    8956 out.go:201] 
	W0914 23:49:49.322926    8956 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0914 23:49:49.322931    8956 out.go:270] * 
	W0914 23:49:49.323348    8956 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:49:49.334922    8956 out.go:201] 
	I0914 23:49:48.007150    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:53.009435    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:53.009559    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:53.021185    8967 logs.go:276] 1 containers: [d9f9a206443a]
	I0914 23:49:53.021268    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:53.031534    8967 logs.go:276] 1 containers: [8f3790e702dc]
	I0914 23:49:53.031618    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:53.041925    8967 logs.go:276] 4 containers: [5311965e0333 353f1ef0a01d d3fe5f4d4d8c 761ee539a978]
	I0914 23:49:53.042000    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:53.052694    8967 logs.go:276] 1 containers: [25eed20117dc]
	I0914 23:49:53.052766    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:53.062925    8967 logs.go:276] 1 containers: [377d77febd41]
	I0914 23:49:53.063015    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:53.073635    8967 logs.go:276] 1 containers: [adfc8e96f969]
	I0914 23:49:53.073718    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:53.084064    8967 logs.go:276] 0 containers: []
	W0914 23:49:53.084077    8967 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:53.084148    8967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:53.100132    8967 logs.go:276] 1 containers: [3691a62ed727]
	I0914 23:49:53.100150    8967 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:53.100158    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:53.123665    8967 logs.go:123] Gathering logs for etcd [8f3790e702dc] ...
	I0914 23:49:53.123675    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f3790e702dc"
	I0914 23:49:53.137433    8967 logs.go:123] Gathering logs for coredns [5311965e0333] ...
	I0914 23:49:53.137444    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5311965e0333"
	I0914 23:49:53.149112    8967 logs.go:123] Gathering logs for kube-proxy [377d77febd41] ...
	I0914 23:49:53.149124    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 377d77febd41"
	I0914 23:49:53.161824    8967 logs.go:123] Gathering logs for container status ...
	I0914 23:49:53.161837    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:53.175268    8967 logs.go:123] Gathering logs for kube-controller-manager [adfc8e96f969] ...
	I0914 23:49:53.175279    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adfc8e96f969"
	I0914 23:49:53.195811    8967 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:53.195821    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:53.231669    8967 logs.go:123] Gathering logs for kube-apiserver [d9f9a206443a] ...
	I0914 23:49:53.231680    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9f9a206443a"
	I0914 23:49:53.246129    8967 logs.go:123] Gathering logs for coredns [353f1ef0a01d] ...
	I0914 23:49:53.246140    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 353f1ef0a01d"
	I0914 23:49:53.257707    8967 logs.go:123] Gathering logs for coredns [761ee539a978] ...
	I0914 23:49:53.257721    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761ee539a978"
	I0914 23:49:53.269514    8967 logs.go:123] Gathering logs for storage-provisioner [3691a62ed727] ...
	I0914 23:49:53.269529    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3691a62ed727"
	I0914 23:49:53.281872    8967 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:53.281883    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:53.317385    8967 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:53.317395    8967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:53.322471    8967 logs.go:123] Gathering logs for coredns [d3fe5f4d4d8c] ...
	I0914 23:49:53.322478    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3fe5f4d4d8c"
	I0914 23:49:53.334250    8967 logs.go:123] Gathering logs for kube-scheduler [25eed20117dc] ...
	I0914 23:49:53.334262    8967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25eed20117dc"
	I0914 23:49:55.857515    8967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:50:00.859708    8967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:50:00.864376    8967 out.go:201] 
	W0914 23:50:00.868189    8967 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0914 23:50:00.868199    8967 out.go:270] * 
	W0914 23:50:00.868930    8967 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:50:00.877330    8967 out.go:201] 
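Both test processes (8956 and 8967) abort the same way: minikube polls the guest apiserver's /healthz endpoint at https://10.0.2.15:8443 for up to 6m0s, every probe times out, and the node start fails with GUEST_START. To narrow down whether the apiserver is actually unresponsive or merely unreachable from the probe's vantage point, one option is to repeat the probe from inside the guest. A minimal sketch, assuming `minikube ssh` still works for this profile (the profile name `running-upgrade-386000` appears in the journal below; `curl -k` skips certificate verification, as a raw healthz probe would):

    # Probe the apiserver healthz endpoint from inside the guest VM.
    # A healthy apiserver answers "ok" within milliseconds.
    minikube -p running-upgrade-386000 ssh -- \
      curl -sk --max-time 5 https://10.0.2.15:8443/healthz; echo

If this returns "ok" while the host-side probe keeps timing out, the failure lies in host-to-guest connectivity (e.g. QEMU user-mode networking) rather than in the apiserver itself.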
	
	
	==> Docker <==
	-- Journal begins at Sun 2024-09-15 06:40:51 UTC, ends at Sun 2024-09-15 06:50:17 UTC. --
	Sep 15 06:50:01 running-upgrade-386000 dockerd[4387]: time="2024-09-15T06:50:01.327352423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 15 06:50:01 running-upgrade-386000 dockerd[4387]: time="2024-09-15T06:50:01.327413421Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a88a91f827cb321a1789fede241d77e064cddfa5c4ce0325ee9dc24c408a692b pid=20440 runtime=io.containerd.runc.v2
	Sep 15 06:50:01 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:01Z" level=error msg="ContainerStats resp: {0x4000849500 linux}"
	Sep 15 06:50:01 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:01Z" level=error msg="ContainerStats resp: {0x4000849c00 linux}"
	Sep 15 06:50:02 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:02Z" level=error msg="ContainerStats resp: {0x40008abb40 linux}"
	Sep 15 06:50:03 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:03Z" level=error msg="ContainerStats resp: {0x4000392b80 linux}"
	Sep 15 06:50:03 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:03Z" level=error msg="ContainerStats resp: {0x4000392cc0 linux}"
	Sep 15 06:50:03 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:03Z" level=error msg="ContainerStats resp: {0x4000392900 linux}"
	Sep 15 06:50:03 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:03Z" level=error msg="ContainerStats resp: {0x4000392fc0 linux}"
	Sep 15 06:50:03 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:03Z" level=error msg="ContainerStats resp: {0x4000a14e40 linux}"
	Sep 15 06:50:03 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:03Z" level=error msg="ContainerStats resp: {0x4000393340 linux}"
	Sep 15 06:50:03 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:03Z" level=error msg="ContainerStats resp: {0x40003937c0 linux}"
	Sep 15 06:50:05 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:05Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 15 06:50:10 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:10Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 15 06:50:13 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:13Z" level=error msg="ContainerStats resp: {0x40009badc0 linux}"
	Sep 15 06:50:13 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:13Z" level=error msg="ContainerStats resp: {0x40009bbf00 linux}"
	Sep 15 06:50:14 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:14Z" level=error msg="ContainerStats resp: {0x40008abf80 linux}"
	Sep 15 06:50:15 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:15Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 15 06:50:15 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:15Z" level=error msg="ContainerStats resp: {0x4000392840 linux}"
	Sep 15 06:50:15 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:15Z" level=error msg="ContainerStats resp: {0x4000392c40 linux}"
	Sep 15 06:50:15 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:15Z" level=error msg="ContainerStats resp: {0x40000b9c00 linux}"
	Sep 15 06:50:15 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:15Z" level=error msg="ContainerStats resp: {0x4000393a40 linux}"
	Sep 15 06:50:15 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:15Z" level=error msg="ContainerStats resp: {0x4000848380 linux}"
	Sep 15 06:50:15 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:15Z" level=error msg="ContainerStats resp: {0x400083a6c0 linux}"
	Sep 15 06:50:15 running-upgrade-386000 cri-dockerd[4037]: time="2024-09-15T06:50:15Z" level=error msg="ContainerStats resp: {0x4000848f00 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	a88a91f827cb3       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   8f9e92f628302
	38b9a41ae6397       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   994b2c749cb19
	5311965e03336       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   994b2c749cb19
	353f1ef0a01d1       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   8f9e92f628302
	377d77febd414       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   207d931cbbd49
	3691a62ed7277       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   69bf18946dbcb
	8f3790e702dcf       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   138292a834e7d
	25eed20117dcb       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   5dfabf54f2610
	d9f9a206443af       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   6475bf85937aa
	adfc8e96f969f       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   5aabd3a656e13
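The status table explains the restart counts seen in the log gathering above: both coredns containers exited once (attempt 1, Exited about 2 minutes ago) and were restarted (attempt 2, running for 16 seconds), while the apiserver, etcd, scheduler, controller-manager, kube-proxy, and storage-provisioner containers have run continuously for roughly 4 minutes. The same snapshot can be reproduced with the filter the log gatherer itself uses; a sketch, assuming Docker is the runtime as shown above:

    # List all Kubernetes-managed containers with their state,
    # mirroring the "docker ps -a --filter=name=k8s_..." calls in the log.
    docker ps -a --filter name=k8s_ \
      --format 'table {{.ID}}\t{{.Names}}\t{{.Status}}'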
	
	
	==> coredns [353f1ef0a01d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4011900789363897890.9049649276449422706. HINFO: read udp 10.244.0.3:55199->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4011900789363897890.9049649276449422706. HINFO: read udp 10.244.0.3:59449->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4011900789363897890.9049649276449422706. HINFO: read udp 10.244.0.3:55809->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4011900789363897890.9049649276449422706. HINFO: read udp 10.244.0.3:56893->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4011900789363897890.9049649276449422706. HINFO: read udp 10.244.0.3:44169->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4011900789363897890.9049649276449422706. HINFO: read udp 10.244.0.3:39835->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4011900789363897890.9049649276449422706. HINFO: read udp 10.244.0.3:59694->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4011900789363897890.9049649276449422706. HINFO: read udp 10.244.0.3:57945->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4011900789363897890.9049649276449422706. HINFO: read udp 10.244.0.3:45358->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4011900789363897890.9049649276449422706. HINFO: read udp 10.244.0.3:54734->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [38b9a41ae639] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3110500126666503389.984012129961847393. HINFO: read udp 10.244.0.2:60714->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3110500126666503389.984012129961847393. HINFO: read udp 10.244.0.2:45409->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3110500126666503389.984012129961847393. HINFO: read udp 10.244.0.2:37793->10.0.2.3:53: i/o timeout
	
	
	==> coredns [5311965e0333] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8770196399311886203.949477511034673805. HINFO: read udp 10.244.0.2:50317->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8770196399311886203.949477511034673805. HINFO: read udp 10.244.0.2:37183->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8770196399311886203.949477511034673805. HINFO: read udp 10.244.0.2:48793->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8770196399311886203.949477511034673805. HINFO: read udp 10.244.0.2:46405->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8770196399311886203.949477511034673805. HINFO: read udp 10.244.0.2:46595->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8770196399311886203.949477511034673805. HINFO: read udp 10.244.0.2:34025->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8770196399311886203.949477511034673805. HINFO: read udp 10.244.0.2:39489->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8770196399311886203.949477511034673805. HINFO: read udp 10.244.0.2:41853->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a88a91f827cb] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2157712956668487876.8788206063418498926. HINFO: read udp 10.244.0.3:46781->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2157712956668487876.8788206063418498926. HINFO: read udp 10.244.0.3:45276->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2157712956668487876.8788206063418498926. HINFO: read udp 10.244.0.3:38310->10.0.2.3:53: i/o timeout
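All four coredns instances log the same failure mode: their startup HINFO self-test queries to the upstream resolver 10.0.2.3:53 (the DNS address QEMU's user-mode networking exposes to the guest) time out. That breaks external name resolution from pods, but it is separate from the apiserver healthz failure above. A quick check of the upstream resolver from inside the guest; a sketch, assuming `dig` is available there and using kubernetes.io only as an arbitrary test name:

    # Query the QEMU-provided resolver directly with a short timeout.
    dig +time=2 +tries=1 @10.0.2.3 kubernetes.io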
	
	
	==> describe nodes <==
	Name:               running-upgrade-386000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-386000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=running-upgrade-386000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T23_46_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:45:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-386000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:50:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:46:00 +0000   Sun, 15 Sep 2024 06:45:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:46:00 +0000   Sun, 15 Sep 2024 06:45:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:46:00 +0000   Sun, 15 Sep 2024 06:45:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:46:00 +0000   Sun, 15 Sep 2024 06:46:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-386000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 faf437c254e74cdc9784b80bb972f77a
	  System UUID:                faf437c254e74cdc9784b80bb972f77a
	  Boot ID:                    a21d6a8a-9cf8-4621-b252-2966f7430459
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-68r7n                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-gtddg                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-386000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-386000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-386000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-6cws2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-386000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-386000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-386000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-386000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-386000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-386000 event: Registered Node running-upgrade-386000 in Controller
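From the cluster's own point of view nothing is wrong: the node has been Ready since 06:46:00, carries no taints, and all eight kube-system pods are scheduled with modest resource requests. That is consistent with the container status above and points the GUEST_START failure at the probe path from the test host into the VM rather than at the control plane. The same view can be obtained without the host's kubeconfig, using the in-guest binary and kubeconfig the log gatherer already invokes; a sketch:

    # Inspect pod state from inside the guest, bypassing host networking.
    minikube -p running-upgrade-386000 ssh -- sudo \
      /var/lib/minikube/binaries/v1.24.1/kubectl get pods -A -o wide \
      --kubeconfig=/var/lib/minikube/kubeconfig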
	
	
	==> dmesg <==
	[  +0.074071] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.152282] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.071174] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.082453] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[  +2.439849] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[  +8.616570] systemd-fstab-generator[1939]: Ignoring "noauto" for root device
	[ +14.451703] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.679754] systemd-fstab-generator[2647]: Ignoring "noauto" for root device
	[  +0.163016] systemd-fstab-generator[2688]: Ignoring "noauto" for root device
	[  +0.109150] systemd-fstab-generator[2699]: Ignoring "noauto" for root device
	[  +0.107874] systemd-fstab-generator[2712]: Ignoring "noauto" for root device
	[  +5.179448] kauditd_printk_skb: 10 callbacks suppressed
	[  +2.485414] systemd-fstab-generator[3994]: Ignoring "noauto" for root device
	[  +0.086146] systemd-fstab-generator[4005]: Ignoring "noauto" for root device
	[  +0.082056] systemd-fstab-generator[4016]: Ignoring "noauto" for root device
	[  +0.108199] systemd-fstab-generator[4030]: Ignoring "noauto" for root device
	[  +2.620803] systemd-fstab-generator[4374]: Ignoring "noauto" for root device
	[  +1.213392] kauditd_printk_skb: 47 callbacks suppressed
	[  +1.123813] systemd-fstab-generator[4715]: Ignoring "noauto" for root device
	[  +1.208716] systemd-fstab-generator[4859]: Ignoring "noauto" for root device
	[  +3.196596] kauditd_printk_skb: 29 callbacks suppressed
	[Sep15 06:42] kauditd_printk_skb: 1 callbacks suppressed
	[Sep15 06:45] systemd-fstab-generator[13533]: Ignoring "noauto" for root device
	[  +5.635439] systemd-fstab-generator[14122]: Ignoring "noauto" for root device
	[  +0.455847] systemd-fstab-generator[14275]: Ignoring "noauto" for root device
	
	
	==> etcd [8f3790e702dc] <==
	{"level":"info","ts":"2024-09-15T06:45:55.462Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T06:45:55.462Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-15T06:45:55.462Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-15T06:45:55.480Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T06:45:55.480Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-15T06:45:55.480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-15T06:45:55.481Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-15T06:45:55.835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-15T06:45:55.835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-15T06:45:55.835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-15T06:45:55.835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-15T06:45:55.835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-15T06:45:55.835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-15T06:45:55.835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-15T06:45:55.835Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:45:55.839Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:45:55.839Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:45:55.839Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:45:55.839Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-386000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:45:55.839Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:45:55.839Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:45:55.840Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-15T06:45:55.839Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:45:55.840Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:45:55.849Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 06:50:17 up 9 min,  0 users,  load average: 0.34, 0.40, 0.21
	Linux running-upgrade-386000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d9f9a206443a] <==
	I0915 06:45:57.243250       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0915 06:45:57.273695       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 06:45:57.273745       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0915 06:45:57.273781       1 cache.go:39] Caches are synced for autoregister controller
	I0915 06:45:57.275261       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0915 06:45:57.275594       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0915 06:45:57.276086       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0915 06:45:57.998220       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0915 06:45:58.176452       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0915 06:45:58.178009       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0915 06:45:58.178021       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0915 06:45:58.327883       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 06:45:58.341394       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 06:45:58.439213       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0915 06:45:58.441138       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0915 06:45:58.441467       1 controller.go:611] quota admission added evaluator for: endpoints
	I0915 06:45:58.442727       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 06:45:59.307923       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0915 06:46:00.049321       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0915 06:46:00.052085       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0915 06:46:00.062745       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0915 06:46:00.103866       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 06:46:12.912155       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0915 06:46:13.062763       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0915 06:46:13.404527       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
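Notably, the apiserver log shows a clean bootstrap and then falls silent: its last entry is at 06:46:13, about four minutes before the journal ends at 06:50:17, with no errors in between. That is consistent with an apiserver that is up and idle while the external healthz probes never reach it. One way to confirm, watching for new errors while re-running a probe; a sketch using the container ID from the log above:

    # Follow the apiserver container log live while retrying the healthz probe.
    docker logs --tail 50 -f d9f9a206443a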
	
	
	==> kube-controller-manager [adfc8e96f969] <==
	I0915 06:46:12.761825       1 shared_informer.go:262] Caches are synced for daemon sets
	I0915 06:46:12.808309       1 shared_informer.go:262] Caches are synced for persistent volume
	I0915 06:46:12.808407       1 shared_informer.go:262] Caches are synced for HPA
	I0915 06:46:12.808307       1 shared_informer.go:262] Caches are synced for taint
	I0915 06:46:12.808450       1 shared_informer.go:262] Caches are synced for GC
	I0915 06:46:12.808482       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0915 06:46:12.808507       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-386000. Assuming now as a timestamp.
	I0915 06:46:12.808548       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0915 06:46:12.808484       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0915 06:46:12.808766       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0915 06:46:12.808900       1 event.go:294] "Event occurred" object="running-upgrade-386000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-386000 event: Registered Node running-upgrade-386000 in Controller"
	I0915 06:46:12.811519       1 shared_informer.go:262] Caches are synced for PVC protection
	I0915 06:46:12.859913       1 shared_informer.go:262] Caches are synced for deployment
	I0915 06:46:12.861230       1 shared_informer.go:262] Caches are synced for resource quota
	I0915 06:46:12.862699       1 shared_informer.go:262] Caches are synced for resource quota
	I0915 06:46:12.907202       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0915 06:46:12.908356       1 shared_informer.go:262] Caches are synced for disruption
	I0915 06:46:12.908362       1 disruption.go:371] Sending events to api server.
	I0915 06:46:12.915049       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6cws2"
	I0915 06:46:13.064652       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0915 06:46:13.166717       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-68r7n"
	I0915 06:46:13.171773       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-gtddg"
	I0915 06:46:13.275490       1 shared_informer.go:262] Caches are synced for garbage collector
	I0915 06:46:13.317100       1 shared_informer.go:262] Caches are synced for garbage collector
	I0915 06:46:13.317110       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [377d77febd41] <==
	I0915 06:46:13.393746       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0915 06:46:13.393788       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0915 06:46:13.393800       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0915 06:46:13.402534       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0915 06:46:13.402544       1 server_others.go:206] "Using iptables Proxier"
	I0915 06:46:13.402557       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0915 06:46:13.402697       1 server.go:661] "Version info" version="v1.24.1"
	I0915 06:46:13.402745       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:46:13.403026       1 config.go:317] "Starting service config controller"
	I0915 06:46:13.403038       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0915 06:46:13.403046       1 config.go:226] "Starting endpoint slice config controller"
	I0915 06:46:13.403065       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0915 06:46:13.403395       1 config.go:444] "Starting node config controller"
	I0915 06:46:13.403423       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0915 06:46:13.503567       1 shared_informer.go:262] Caches are synced for node config
	I0915 06:46:13.503577       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0915 06:46:13.503584       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [25eed20117dc] <==
	W0915 06:45:57.233736       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 06:45:57.233772       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0915 06:45:57.233822       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 06:45:57.233842       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0915 06:45:57.233891       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:45:57.233922       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0915 06:45:57.234954       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:45:57.234965       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0915 06:45:58.039648       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:45:58.039730       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0915 06:45:58.049098       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 06:45:58.049140       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0915 06:45:58.085135       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:45:58.085164       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0915 06:45:58.098557       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 06:45:58.098598       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0915 06:45:58.161231       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:45:58.161248       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0915 06:45:58.197935       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:45:58.197956       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0915 06:45:58.211552       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:45:58.211568       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0915 06:45:58.216030       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:45:58.216041       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0915 06:45:58.531325       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Sun 2024-09-15 06:40:51 UTC, ends at Sun 2024-09-15 06:50:17 UTC. --
	Sep 15 06:46:01 running-upgrade-386000 kubelet[14128]: I0915 06:46:01.310990   14128 reconciler.go:157] "Reconciler: start to sync state"
	Sep 15 06:46:01 running-upgrade-386000 kubelet[14128]: E0915 06:46:01.685445   14128 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-386000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-386000"
	Sep 15 06:46:01 running-upgrade-386000 kubelet[14128]: E0915 06:46:01.884422   14128 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-386000\" already exists" pod="kube-system/etcd-running-upgrade-386000"
	Sep 15 06:46:02 running-upgrade-386000 kubelet[14128]: E0915 06:46:02.084521   14128 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-386000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-386000"
	Sep 15 06:46:02 running-upgrade-386000 kubelet[14128]: I0915 06:46:02.283563   14128 request.go:601] Waited for 1.130061377s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 15 06:46:02 running-upgrade-386000 kubelet[14128]: E0915 06:46:02.286180   14128 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-386000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-386000"
	Sep 15 06:46:12 running-upgrade-386000 kubelet[14128]: I0915 06:46:12.681041   14128 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 15 06:46:12 running-upgrade-386000 kubelet[14128]: I0915 06:46:12.681372   14128 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 15 06:46:12 running-upgrade-386000 kubelet[14128]: I0915 06:46:12.815806   14128 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 06:46:12 running-upgrade-386000 kubelet[14128]: I0915 06:46:12.918169   14128 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 06:46:12 running-upgrade-386000 kubelet[14128]: I0915 06:46:12.984567   14128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9b5d2aef-b048-48ca-87af-4e52beeed846-tmp\") pod \"storage-provisioner\" (UID: \"9b5d2aef-b048-48ca-87af-4e52beeed846\") " pod="kube-system/storage-provisioner"
	Sep 15 06:46:12 running-upgrade-386000 kubelet[14128]: I0915 06:46:12.984592   14128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnmkt\" (UniqueName: \"kubernetes.io/projected/9b5d2aef-b048-48ca-87af-4e52beeed846-kube-api-access-tnmkt\") pod \"storage-provisioner\" (UID: \"9b5d2aef-b048-48ca-87af-4e52beeed846\") " pod="kube-system/storage-provisioner"
	Sep 15 06:46:12 running-upgrade-386000 kubelet[14128]: I0915 06:46:12.984603   14128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dcc2bc1-5f8f-448c-9c10-8366f5c529fc-lib-modules\") pod \"kube-proxy-6cws2\" (UID: \"1dcc2bc1-5f8f-448c-9c10-8366f5c529fc\") " pod="kube-system/kube-proxy-6cws2"
	Sep 15 06:46:12 running-upgrade-386000 kubelet[14128]: I0915 06:46:12.984615   14128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69qlw\" (UniqueName: \"kubernetes.io/projected/1dcc2bc1-5f8f-448c-9c10-8366f5c529fc-kube-api-access-69qlw\") pod \"kube-proxy-6cws2\" (UID: \"1dcc2bc1-5f8f-448c-9c10-8366f5c529fc\") " pod="kube-system/kube-proxy-6cws2"
	Sep 15 06:46:12 running-upgrade-386000 kubelet[14128]: I0915 06:46:12.984624   14128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1dcc2bc1-5f8f-448c-9c10-8366f5c529fc-kube-proxy\") pod \"kube-proxy-6cws2\" (UID: \"1dcc2bc1-5f8f-448c-9c10-8366f5c529fc\") " pod="kube-system/kube-proxy-6cws2"
	Sep 15 06:46:12 running-upgrade-386000 kubelet[14128]: I0915 06:46:12.984634   14128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dcc2bc1-5f8f-448c-9c10-8366f5c529fc-xtables-lock\") pod \"kube-proxy-6cws2\" (UID: \"1dcc2bc1-5f8f-448c-9c10-8366f5c529fc\") " pod="kube-system/kube-proxy-6cws2"
	Sep 15 06:46:13 running-upgrade-386000 kubelet[14128]: I0915 06:46:13.171118   14128 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 06:46:13 running-upgrade-386000 kubelet[14128]: I0915 06:46:13.184098   14128 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 06:46:13 running-upgrade-386000 kubelet[14128]: I0915 06:46:13.186058   14128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2559cc78-3752-4ec2-acb6-e738e3744ff0-config-volume\") pod \"coredns-6d4b75cb6d-68r7n\" (UID: \"2559cc78-3752-4ec2-acb6-e738e3744ff0\") " pod="kube-system/coredns-6d4b75cb6d-68r7n"
	Sep 15 06:46:13 running-upgrade-386000 kubelet[14128]: I0915 06:46:13.186145   14128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b02155b4-1182-4d6b-bf55-958a7de7ea99-config-volume\") pod \"coredns-6d4b75cb6d-gtddg\" (UID: \"b02155b4-1182-4d6b-bf55-958a7de7ea99\") " pod="kube-system/coredns-6d4b75cb6d-gtddg"
	Sep 15 06:46:13 running-upgrade-386000 kubelet[14128]: I0915 06:46:13.186184   14128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvl7f\" (UniqueName: \"kubernetes.io/projected/b02155b4-1182-4d6b-bf55-958a7de7ea99-kube-api-access-mvl7f\") pod \"coredns-6d4b75cb6d-gtddg\" (UID: \"b02155b4-1182-4d6b-bf55-958a7de7ea99\") " pod="kube-system/coredns-6d4b75cb6d-gtddg"
	Sep 15 06:46:13 running-upgrade-386000 kubelet[14128]: I0915 06:46:13.186218   14128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbf8\" (UniqueName: \"kubernetes.io/projected/2559cc78-3752-4ec2-acb6-e738e3744ff0-kube-api-access-qhbf8\") pod \"coredns-6d4b75cb6d-68r7n\" (UID: \"2559cc78-3752-4ec2-acb6-e738e3744ff0\") " pod="kube-system/coredns-6d4b75cb6d-68r7n"
	Sep 15 06:46:13 running-upgrade-386000 kubelet[14128]: I0915 06:46:13.275360   14128 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="69bf18946dbcbb00fc89143aeba15ff3137c3a1bff5459900190cdf8ccd647fe"
	Sep 15 06:50:01 running-upgrade-386000 kubelet[14128]: I0915 06:50:01.372743   14128 scope.go:110] "RemoveContainer" containerID="761ee539a97853792067dd8319d4aab30dd2812f35152b4be2e9a4e6a40ce0d9"
	Sep 15 06:50:01 running-upgrade-386000 kubelet[14128]: I0915 06:50:01.387979   14128 scope.go:110] "RemoveContainer" containerID="d3fe5f4d4d8ce5c08ddbe5463de6afa3c39d956dc3748c3377b2e769561bed68"
	
	
	==> storage-provisioner [3691a62ed727] <==
	I0915 06:46:13.356687       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:46:13.361446       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:46:13.361523       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:46:13.364957       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:46:13.365082       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-386000_ac6d10ed-2a3c-4a65-889b-006640bfe32c!
	I0915 06:46:13.365489       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"15f45dd0-bed9-47c9-9b83-577d3c3788bc", APIVersion:"v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-386000_ac6d10ed-2a3c-4a65-889b-006640bfe32c became leader
	I0915 06:46:13.465830       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-386000_ac6d10ed-2a3c-4a65-889b-006640bfe32c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-386000 -n running-upgrade-386000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-386000 -n running-upgrade-386000: exit status 2 (15.689872167s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-386000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-386000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-386000
--- FAIL: TestRunningBinaryUpgrade (619.57s)
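
For reference, the status probe the harness ran above can be repeated outside the test. A minimal Go sketch, shelling out the same way helpers_test.go does; the binary path and profile name are taken from this report and are environment-specific:

	// Rerun the apiserver status probe from this report; a non-zero exit
	// (here: exit status 2) encodes component state and "may be ok".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.APIServer}}", "-p", "running-upgrade-386000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // prints "Stopped" in the failure above
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit status:", exitErr.ExitCode())
		}
	}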

TestKubernetesUpgrade (19.44s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.221893834s)

-- stdout --
	* [kubernetes-upgrade-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-838000" primary control-plane node in "kubernetes-upgrade-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:39:54.617856    8866 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:39:54.617996    8866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:39:54.617999    8866 out.go:358] Setting ErrFile to fd 2...
	I0914 23:39:54.618002    8866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:39:54.618135    8866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:39:54.619263    8866 out.go:352] Setting JSON to false
	I0914 23:39:54.635198    8866 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5963,"bootTime":1726376431,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:39:54.635263    8866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:39:54.639948    8866 out.go:177] * [kubernetes-upgrade-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:39:54.648000    8866 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:39:54.648046    8866 notify.go:220] Checking for updates...
	I0914 23:39:54.654916    8866 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:39:54.656224    8866 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:39:54.658928    8866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:39:54.661892    8866 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:39:54.664923    8866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:39:54.668216    8866 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:39:54.668278    8866 config.go:182] Loaded profile config "offline-docker-506000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:39:54.668324    8866 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:39:54.671905    8866 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:39:54.678875    8866 start.go:297] selected driver: qemu2
	I0914 23:39:54.678883    8866 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:39:54.678890    8866 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:39:54.681045    8866 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:39:54.684900    8866 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:39:54.687993    8866 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 23:39:54.688007    8866 cni.go:84] Creating CNI manager for ""
	I0914 23:39:54.688030    8866 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 23:39:54.688060    8866 start.go:340] cluster config:
	{Name:kubernetes-upgrade-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:39:54.691823    8866 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:39:54.699942    8866 out.go:177] * Starting "kubernetes-upgrade-838000" primary control-plane node in "kubernetes-upgrade-838000" cluster
	I0914 23:39:54.706893    8866 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 23:39:54.706910    8866 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 23:39:54.706922    8866 cache.go:56] Caching tarball of preloaded images
	I0914 23:39:54.706993    8866 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:39:54.706998    8866 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0914 23:39:54.707068    8866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/kubernetes-upgrade-838000/config.json ...
	I0914 23:39:54.707079    8866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/kubernetes-upgrade-838000/config.json: {Name:mkc4b9c9c7d84053e5bfbf914e8335315614ad0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:39:54.707439    8866 start.go:360] acquireMachinesLock for kubernetes-upgrade-838000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:39:54.791296    8866 start.go:364] duration metric: took 83.839334ms to acquireMachinesLock for "kubernetes-upgrade-838000"
	I0914 23:39:54.791323    8866 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:39:54.791391    8866 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:39:54.794832    8866 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:39:54.822615    8866 start.go:159] libmachine.API.Create for "kubernetes-upgrade-838000" (driver="qemu2")
	I0914 23:39:54.822646    8866 client.go:168] LocalClient.Create starting
	I0914 23:39:54.822727    8866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:39:54.822771    8866 main.go:141] libmachine: Decoding PEM data...
	I0914 23:39:54.822785    8866 main.go:141] libmachine: Parsing certificate...
	I0914 23:39:54.822834    8866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:39:54.822871    8866 main.go:141] libmachine: Decoding PEM data...
	I0914 23:39:54.822882    8866 main.go:141] libmachine: Parsing certificate...
	I0914 23:39:54.825970    8866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:39:55.098459    8866 main.go:141] libmachine: Creating SSH key...
	I0914 23:39:55.223268    8866 main.go:141] libmachine: Creating Disk image...
	I0914 23:39:55.223273    8866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:39:55.223485    8866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0914 23:39:55.233646    8866 main.go:141] libmachine: STDOUT: 
	I0914 23:39:55.233661    8866 main.go:141] libmachine: STDERR: 
	I0914 23:39:55.233712    8866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2 +20000M
	I0914 23:39:55.241665    8866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:39:55.241679    8866 main.go:141] libmachine: STDERR: 
	I0914 23:39:55.241690    8866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0914 23:39:55.241701    8866 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:39:55.241714    8866 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:39:55.241740    8866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:6c:15:8d:84:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0914 23:39:55.243324    8866 main.go:141] libmachine: STDOUT: 
	I0914 23:39:55.243338    8866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:39:55.243360    8866 client.go:171] duration metric: took 420.713458ms to LocalClient.Create
	I0914 23:39:57.245484    8866 start.go:128] duration metric: took 2.454115625s to createHost
	I0914 23:39:57.245561    8866 start.go:83] releasing machines lock for "kubernetes-upgrade-838000", held for 2.454296916s
	W0914 23:39:57.245657    8866 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:39:57.260914    8866 out.go:177] * Deleting "kubernetes-upgrade-838000" in qemu2 ...
	W0914 23:39:57.300503    8866 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:39:57.300534    8866 start.go:729] Will try again in 5 seconds ...
	I0914 23:40:02.302708    8866 start.go:360] acquireMachinesLock for kubernetes-upgrade-838000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:40:02.302964    8866 start.go:364] duration metric: took 182.542µs to acquireMachinesLock for "kubernetes-upgrade-838000"
	I0914 23:40:02.303007    8866 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:40:02.303144    8866 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:40:02.310234    8866 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:40:02.341430    8866 start.go:159] libmachine.API.Create for "kubernetes-upgrade-838000" (driver="qemu2")
	I0914 23:40:02.341476    8866 client.go:168] LocalClient.Create starting
	I0914 23:40:02.341541    8866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:40:02.341594    8866 main.go:141] libmachine: Decoding PEM data...
	I0914 23:40:02.341607    8866 main.go:141] libmachine: Parsing certificate...
	I0914 23:40:02.341657    8866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:40:02.341689    8866 main.go:141] libmachine: Decoding PEM data...
	I0914 23:40:02.341705    8866 main.go:141] libmachine: Parsing certificate...
	I0914 23:40:02.342208    8866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:40:02.550873    8866 main.go:141] libmachine: Creating SSH key...
	I0914 23:40:02.753197    8866 main.go:141] libmachine: Creating Disk image...
	I0914 23:40:02.753207    8866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:40:02.753422    8866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0914 23:40:02.763307    8866 main.go:141] libmachine: STDOUT: 
	I0914 23:40:02.763330    8866 main.go:141] libmachine: STDERR: 
	I0914 23:40:02.763398    8866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2 +20000M
	I0914 23:40:02.771778    8866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:40:02.771869    8866 main.go:141] libmachine: STDERR: 
	I0914 23:40:02.771885    8866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0914 23:40:02.771893    8866 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:40:02.771903    8866 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:40:02.771939    8866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:bc:ef:2d:8f:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0914 23:40:02.773648    8866 main.go:141] libmachine: STDOUT: 
	I0914 23:40:02.773661    8866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:40:02.773675    8866 client.go:171] duration metric: took 432.203208ms to LocalClient.Create
	I0914 23:40:04.775830    8866 start.go:128] duration metric: took 2.472699875s to createHost
	I0914 23:40:04.775916    8866 start.go:83] releasing machines lock for "kubernetes-upgrade-838000", held for 2.472979958s
	W0914 23:40:04.776309    8866 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:40:04.783851    8866 out.go:201] 
	W0914 23:40:04.789185    8866 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:40:04.789244    8866 out.go:270] * 
	* 
	W0914 23:40:04.791771    8866 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:40:04.800058    8866 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
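
Every qemu2 VM create and restart in this run fails at the same step: libmachine hands the QEMU command line to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the control socket. A minimal Go sketch that probes that socket directly; the path /var/run/socket_vmnet is taken from the log above, and dialing it may require root on the host:

	// Probe the socket_vmnet control socket that every qemu2 start above
	// failed to reach ("Connection refused").
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		if _, err := os.Stat(sock); err != nil {
			fmt.Println("socket missing:", err) // helper likely not running
			return
		}
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Println("dial failed:", err) // matches the error in the log
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails the same way, the problem is the socket_vmnet helper on the host rather than minikube itself, which is consistent with the delete-and-retry attempt above failing identically.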
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-838000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-838000: (3.766661125s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-838000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-838000 status --format={{.Host}}: exit status 7 (67.8355ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
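
The --format argument used here is a Go template evaluated against minikube's status structure. A minimal sketch of the same mechanism; Status is a stand-in struct (not minikube's actual type), with field names mirroring the {{.Host}} and {{.APIServer}} selectors and values taken from this report:

	// Evaluate a {{.Host}}-style template the way "status --format" does;
	// Status is a stand-in struct, with values from this report.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct{ Host, APIServer string }

	func main() {
		st := Status{Host: "Stopped", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		tmpl.Execute(os.Stdout, st) // prints "Stopped", as in the stdout above
	}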
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.202958958s)

-- stdout --
	* [kubernetes-upgrade-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-838000" primary control-plane node in "kubernetes-upgrade-838000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-838000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-838000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:40:08.681172    8919 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:40:08.681298    8919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:40:08.681301    8919 out.go:358] Setting ErrFile to fd 2...
	I0914 23:40:08.681303    8919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:40:08.681443    8919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:40:08.682473    8919 out.go:352] Setting JSON to false
	I0914 23:40:08.698552    8919 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5977,"bootTime":1726376431,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:40:08.698624    8919 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:40:08.702779    8919 out.go:177] * [kubernetes-upgrade-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:40:08.709665    8919 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:40:08.709713    8919 notify.go:220] Checking for updates...
	I0914 23:40:08.717736    8919 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:40:08.721703    8919 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:40:08.728710    8919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:40:08.735820    8919 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:40:08.745667    8919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:40:08.749981    8919 config.go:182] Loaded profile config "kubernetes-upgrade-838000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0914 23:40:08.750205    8919 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:40:08.752591    8919 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:40:08.757697    8919 start.go:297] selected driver: qemu2
	I0914 23:40:08.757702    8919 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:40:08.757749    8919 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:40:08.760141    8919 cni.go:84] Creating CNI manager for ""
	I0914 23:40:08.760173    8919 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:40:08.760195    8919 start.go:340] cluster config:
	{Name:kubernetes-upgrade-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-838000 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:40:08.763850    8919 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:40:08.771690    8919 out.go:177] * Starting "kubernetes-upgrade-838000" primary control-plane node in "kubernetes-upgrade-838000" cluster
	I0914 23:40:08.774667    8919 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:40:08.774680    8919 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:40:08.774693    8919 cache.go:56] Caching tarball of preloaded images
	I0914 23:40:08.774748    8919 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:40:08.774755    8919 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:40:08.774825    8919 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/kubernetes-upgrade-838000/config.json ...
	I0914 23:40:08.775106    8919 start.go:360] acquireMachinesLock for kubernetes-upgrade-838000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:40:08.775139    8919 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "kubernetes-upgrade-838000"
	I0914 23:40:08.775147    8919 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:40:08.775152    8919 fix.go:54] fixHost starting: 
	I0914 23:40:08.775260    8919 fix.go:112] recreateIfNeeded on kubernetes-upgrade-838000: state=Stopped err=<nil>
	W0914 23:40:08.775268    8919 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:40:08.778693    8919 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-838000" ...
	I0914 23:40:08.785678    8919 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:40:08.785710    8919 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:bc:ef:2d:8f:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0914 23:40:08.787695    8919 main.go:141] libmachine: STDOUT: 
	I0914 23:40:08.787712    8919 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:40:08.787745    8919 fix.go:56] duration metric: took 12.592875ms for fixHost
	I0914 23:40:08.787750    8919 start.go:83] releasing machines lock for "kubernetes-upgrade-838000", held for 12.607209ms
	W0914 23:40:08.787755    8919 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:40:08.787795    8919 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:40:08.787799    8919 start.go:729] Will try again in 5 seconds ...
	I0914 23:40:13.788594    8919 start.go:360] acquireMachinesLock for kubernetes-upgrade-838000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:40:13.789119    8919 start.go:364] duration metric: took 404.334µs to acquireMachinesLock for "kubernetes-upgrade-838000"
	I0914 23:40:13.789273    8919 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:40:13.789292    8919 fix.go:54] fixHost starting: 
	I0914 23:40:13.789993    8919 fix.go:112] recreateIfNeeded on kubernetes-upgrade-838000: state=Stopped err=<nil>
	W0914 23:40:13.790021    8919 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:40:13.801632    8919 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-838000" ...
	I0914 23:40:13.806654    8919 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:40:13.806886    8919 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:bc:ef:2d:8f:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0914 23:40:13.816598    8919 main.go:141] libmachine: STDOUT: 
	I0914 23:40:13.816681    8919 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:40:13.816817    8919 fix.go:56] duration metric: took 27.523083ms for fixHost
	I0914 23:40:13.816847    8919 start.go:83] releasing machines lock for "kubernetes-upgrade-838000", held for 27.699916ms
	W0914 23:40:13.817165    8919 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-838000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:40:13.825558    8919 out.go:201] 
	W0914 23:40:13.829773    8919 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:40:13.829844    8919 out.go:270] * 
	W0914 23:40:13.831764    8919 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:40:13.839600    8919 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-838000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-838000 version --output=json: exit status 1 (61.22625ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-838000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
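Note: the kubectl failure here is a downstream symptom rather than a separate bug: minikube only writes a kubeconfig entry for a profile after a successful start, so the "kubernetes-upgrade-838000" context was never created. A quick way to confirm which contexts actually exist (standard kubectl, nothing minikube-specific; the kubeconfig path is the one this run exported):

	# list the contexts kubectl knows about; the failed profile should be absent
	kubectl config get-contexts
	# inspect the raw kubeconfig used by the test run
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19644-6577/kubeconfig config view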
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-14 23:40:13.915967 -0700 PDT m=+689.108174209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-838000 -n kubernetes-upgrade-838000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-838000 -n kubernetes-upgrade-838000: exit status 7 (34.923208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-838000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-838000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-838000
--- FAIL: TestKubernetesUpgrade (19.44s)
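Every qemu2 start in this test dies at the same point: socket_vmnet_client cannot reach the /var/run/socket_vmnet socket, so the VM never gets its network and the start aborts with GUEST_PROVISION. That makes this a host-environment failure, not an upgrade regression. A minimal sketch of how one might verify the daemon on the build host (assuming the Homebrew-service install that minikube's qemu2 driver docs describe):

	# is the daemon process alive, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# restart it if not (Homebrew runs it as a root service)
	sudo brew services restart socket_vmnet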

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (585.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.395255541 start -p stopped-upgrade-438000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.395255541 start -p stopped-upgrade-438000 --memory=2200 --vm-driver=qemu2 : (53.209620375s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.395255541 -p stopped-upgrade-438000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.395255541 -p stopped-upgrade-438000 stop: (12.092424042s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-438000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-438000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.612929167s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-438000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-438000" primary control-plane node in "stopped-upgrade-438000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-438000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:41:08.760502    8956 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:41:08.760685    8956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:41:08.760693    8956 out.go:358] Setting ErrFile to fd 2...
	I0914 23:41:08.760696    8956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:41:08.760821    8956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:41:08.761970    8956 out.go:352] Setting JSON to false
	I0914 23:41:08.780578    8956 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6037,"bootTime":1726376431,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:41:08.780651    8956 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:41:08.784766    8956 out.go:177] * [stopped-upgrade-438000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:41:08.791673    8956 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:41:08.791746    8956 notify.go:220] Checking for updates...
	I0914 23:41:08.799749    8956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:41:08.802719    8956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:41:08.805751    8956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:41:08.808785    8956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:41:08.811768    8956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:41:08.815219    8956 config.go:182] Loaded profile config "stopped-upgrade-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 23:41:08.818724    8956 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 23:41:08.821725    8956 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:41:08.824725    8956 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:41:08.830697    8956 start.go:297] selected driver: qemu2
	I0914 23:41:08.830704    8956 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51261 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 23:41:08.830761    8956 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:41:08.833347    8956 cni.go:84] Creating CNI manager for ""
	I0914 23:41:08.833385    8956 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:41:08.833413    8956 start.go:340] cluster config:
	{Name:stopped-upgrade-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51261 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 23:41:08.833462    8956 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:41:08.841698    8956 out.go:177] * Starting "stopped-upgrade-438000" primary control-plane node in "stopped-upgrade-438000" cluster
	I0914 23:41:08.845776    8956 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 23:41:08.845792    8956 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0914 23:41:08.845803    8956 cache.go:56] Caching tarball of preloaded images
	I0914 23:41:08.845859    8956 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:41:08.845864    8956 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0914 23:41:08.845923    8956 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/config.json ...
	I0914 23:41:08.846346    8956 start.go:360] acquireMachinesLock for stopped-upgrade-438000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:41:08.846378    8956 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "stopped-upgrade-438000"
	I0914 23:41:08.846385    8956 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:41:08.846391    8956 fix.go:54] fixHost starting: 
	I0914 23:41:08.846491    8956 fix.go:112] recreateIfNeeded on stopped-upgrade-438000: state=Stopped err=<nil>
	W0914 23:41:08.846501    8956 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:41:08.854716    8956 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-438000" ...
	I0914 23:41:08.858729    8956 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:41:08.858801    8956 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51229-:22,hostfwd=tcp::51230-:2376,hostname=stopped-upgrade-438000 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/disk.qcow2
	I0914 23:41:08.907244    8956 main.go:141] libmachine: STDOUT: 
	I0914 23:41:08.907264    8956 main.go:141] libmachine: STDERR: 
	I0914 23:41:08.907272    8956 main.go:141] libmachine: Waiting for VM to start (ssh -p 51229 docker@127.0.0.1)...
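Unlike the kubernetes-upgrade test above, this profile was created by the v1.26.0 binary with QEMU user-mode networking (-nic user,...,hostfwd=tcp::51229-:22), so it boots without socket_vmnet and SSH reaches the guest through a forwarded localhost port. A sketch of the equivalent manual reachability check, reusing the port and key path that appear in this log:

	# the guest's sshd is forwarded to localhost:51229
	ssh -p 51229 -i /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa docker@127.0.0.1 'echo guest reachable'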
	I0914 23:41:29.328190    8956 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/config.json ...
	I0914 23:41:29.328760    8956 machine.go:93] provisionDockerMachine start ...
	I0914 23:41:29.328927    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.329368    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.329381    8956 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 23:41:29.418253    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 23:41:29.418283    8956 buildroot.go:166] provisioning hostname "stopped-upgrade-438000"
	I0914 23:41:29.418448    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.418718    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.418733    8956 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-438000 && echo "stopped-upgrade-438000" | sudo tee /etc/hostname
	I0914 23:41:29.501331    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-438000
	
	I0914 23:41:29.501422    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.501608    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.501620    8956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-438000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-438000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-438000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:41:29.571316    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:41:29.571327    8956 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19644-6577/.minikube CaCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19644-6577/.minikube}
	I0914 23:41:29.571339    8956 buildroot.go:174] setting up certificates
	I0914 23:41:29.571345    8956 provision.go:84] configureAuth start
	I0914 23:41:29.571350    8956 provision.go:143] copyHostCerts
	I0914 23:41:29.571412    8956 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem, removing ...
	I0914 23:41:29.571431    8956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem
	I0914 23:41:29.571541    8956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.pem (1082 bytes)
	I0914 23:41:29.571723    8956 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem, removing ...
	I0914 23:41:29.571726    8956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem
	I0914 23:41:29.571769    8956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/cert.pem (1123 bytes)
	I0914 23:41:29.571887    8956 exec_runner.go:144] found /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem, removing ...
	I0914 23:41:29.571890    8956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem
	I0914 23:41:29.571932    8956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19644-6577/.minikube/key.pem (1679 bytes)
	I0914 23:41:29.572029    8956 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-438000 san=[127.0.0.1 localhost minikube stopped-upgrade-438000]
	I0914 23:41:29.641290    8956 provision.go:177] copyRemoteCerts
	I0914 23:41:29.641340    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:41:29.641349    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
	I0914 23:41:29.675478    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 23:41:29.682190    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 23:41:29.689043    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 23:41:29.696364    8956 provision.go:87] duration metric: took 125.011917ms to configureAuth
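At this point configureAuth has copied the CA and pushed a freshly generated server certificate into /etc/docker on the guest, so Docker's TLS endpoint (guest port 2376, forwarded to localhost:51230 by the QEMU command above) should accept mutually authenticated connections. A sketch of verifying that by hand with the client certs from the same store (plain docker CLI flags, nothing minikube-specific):

	docker --tlsverify \
	  --tlscacert /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem \
	  --tlscert /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem \
	  --tlskey /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem \
	  -H tcp://127.0.0.1:51230 version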
	I0914 23:41:29.696374    8956 buildroot.go:189] setting minikube options for container-runtime
	I0914 23:41:29.696479    8956 config.go:182] Loaded profile config "stopped-upgrade-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 23:41:29.696526    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.696609    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.696614    8956 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 23:41:29.761412    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 23:41:29.761425    8956 buildroot.go:70] root file system type: tmpfs
	I0914 23:41:29.761480    8956 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 23:41:29.761529    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.761627    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.761662    8956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 23:41:29.830745    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 23:41:29.830810    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:29.830922    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:29.830930    8956 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 23:41:30.204597    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 23:41:30.204612    8956 machine.go:96] duration metric: took 875.861209ms to provisionDockerMachine
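The unit file written above relies on the standard systemd idiom for replacing an inherited start command: an empty ExecStart= first clears whatever the base unit defined, then the second ExecStart= supplies the replacement; without the reset, systemd refuses to start a non-oneshot service that ends up with more than one ExecStart=. A minimal drop-in illustrating the same pattern (hypothetical override path and trimmed dockerd flags, not the unit minikube writes):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker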
	I0914 23:41:30.204619    8956 start.go:293] postStartSetup for "stopped-upgrade-438000" (driver="qemu2")
	I0914 23:41:30.204625    8956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:41:30.204698    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:41:30.204709    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
	I0914 23:41:30.239334    8956 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:41:30.240768    8956 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 23:41:30.240778    8956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19644-6577/.minikube/addons for local assets ...
	I0914 23:41:30.240867    8956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19644-6577/.minikube/files for local assets ...
	I0914 23:41:30.240965    8956 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem -> 70932.pem in /etc/ssl/certs
	I0914 23:41:30.241087    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:41:30.244362    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem --> /etc/ssl/certs/70932.pem (1708 bytes)
	I0914 23:41:30.252029    8956 start.go:296] duration metric: took 47.403916ms for postStartSetup
	I0914 23:41:30.252049    8956 fix.go:56] duration metric: took 21.406067334s for fixHost
	I0914 23:41:30.252102    8956 main.go:141] libmachine: Using SSH client type: native
	I0914 23:41:30.252222    8956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103371190] 0x1033739d0 <nil>  [] 0s} localhost 51229 <nil> <nil>}
	I0914 23:41:30.252229    8956 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 23:41:30.317823    8956 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726382490.633243629
	
	I0914 23:41:30.317833    8956 fix.go:216] guest clock: 1726382490.633243629
	I0914 23:41:30.317838    8956 fix.go:229] Guest: 2024-09-14 23:41:30.633243629 -0700 PDT Remote: 2024-09-14 23:41:30.252051 -0700 PDT m=+21.518313959 (delta=381.192629ms)
	I0914 23:41:30.317850    8956 fix.go:200] guest clock delta is within tolerance: 381.192629ms
	I0914 23:41:30.317852    8956 start.go:83] releasing machines lock for "stopped-upgrade-438000", held for 21.471879333s
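fixHost closes with a guest/host clock comparison: it runs date +%s.%N in the guest and accepts the machine when the measured delta (381ms here) is within minikube's drift tolerance. The same check can be approximated by hand over the forwarded SSH port shown earlier:

	# print guest time and host time back to back and eyeball the delta
	ssh -p 51229 -i /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa docker@127.0.0.1 'date +%s.%N'; date +%s.%N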
	I0914 23:41:30.317940    8956 ssh_runner.go:195] Run: cat /version.json
	I0914 23:41:30.317949    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
	I0914 23:41:30.317955    8956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:41:30.317971    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
	W0914 23:41:30.318613    8956 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51446->127.0.0.1:51229: write: broken pipe
	I0914 23:41:30.318633    8956 retry.go:31] will retry after 306.629026ms: ssh: handshake failed: write tcp 127.0.0.1:51446->127.0.0.1:51229: write: broken pipe
	W0914 23:41:30.352896    8956 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 23:41:30.352971    8956 ssh_runner.go:195] Run: systemctl --version
	I0914 23:41:30.355294    8956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 23:41:30.357291    8956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 23:41:30.357347    8956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0914 23:41:30.360675    8956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0914 23:41:30.365647    8956 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 23:41:30.365669    8956 start.go:495] detecting cgroup driver to use...
	I0914 23:41:30.365852    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:41:30.373158    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0914 23:41:30.376300    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 23:41:30.379433    8956 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 23:41:30.379468    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 23:41:30.382442    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 23:41:30.385480    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 23:41:30.388717    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 23:41:30.392167    8956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 23:41:30.395969    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 23:41:30.399163    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 23:41:30.401828    8956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 23:41:30.405012    8956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 23:41:30.408383    8956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 23:41:30.411473    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:30.476234    8956 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 23:41:30.482760    8956 start.go:495] detecting cgroup driver to use...
	I0914 23:41:30.482833    8956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 23:41:30.488971    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:41:30.496244    8956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:41:30.502733    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:41:30.507677    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 23:41:30.512225    8956 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 23:41:30.552018    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 23:41:30.557668    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:41:30.563573    8956 ssh_runner.go:195] Run: which cri-dockerd
	I0914 23:41:30.564865    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 23:41:30.569150    8956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 23:41:30.575674    8956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 23:41:30.653528    8956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 23:41:30.724099    8956 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 23:41:30.724157    8956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0914 23:41:30.731689    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:30.804416    8956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 23:41:31.930539    8956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.12611625s)
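Both runtimes have now been pinned to the cgroupfs cgroup driver (SystemdCgroup = false in containerd's config, a generated /etc/docker/daemon.json for Docker) so that they agree with the kubelet; a driver mismatch between runtime and kubelet is a classic source of kubeadm bring-up failures. One way to confirm the active driver after the restart (standard docker CLI, run inside the guest):

	docker info --format '{{.CgroupDriver}}'   # expected: cgroupfs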
	I0914 23:41:31.930653    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0914 23:41:31.935939    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 23:41:31.941156    8956 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 23:41:32.015581    8956 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 23:41:32.083271    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:32.143314    8956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 23:41:32.148734    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 23:41:32.153524    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:32.222710    8956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0914 23:41:32.260411    8956 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 23:41:32.260513    8956 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 23:41:32.263204    8956 start.go:563] Will wait 60s for crictl version
	I0914 23:41:32.263263    8956 ssh_runner.go:195] Run: which crictl
	I0914 23:41:32.264686    8956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 23:41:32.279410    8956 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0914 23:41:32.279491    8956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 23:41:32.295307    8956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 23:41:32.313290    8956 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0914 23:41:32.313372    8956 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0914 23:41:32.314641    8956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 23:41:32.318098    8956 kubeadm.go:883] updating cluster {Name:stopped-upgrade-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51261 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0914 23:41:32.318147    8956 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 23:41:32.318198    8956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 23:41:32.328366    8956 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 23:41:32.328375    8956 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
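The mismatch here is purely one of naming: the v1.26.0-era preload tagged its control-plane images under the old k8s.gcr.io registry, while this minikube build looks for registry.k8s.io names, so every image is reported missing and gets reloaded from the on-disk image cache below. Conceptually the gap is just an alias; a hedged illustration of what it amounts to for one image (docker tag creates a second name, it copies no data):

	docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1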
	I0914 23:41:32.328433    8956 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 23:41:32.332173    8956 ssh_runner.go:195] Run: which lz4
	I0914 23:41:32.333658    8956 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 23:41:32.334984    8956 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 23:41:32.334995    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0914 23:41:33.260560    8956 docker.go:649] duration metric: took 926.967042ms to copy over tarball
	I0914 23:41:33.260628    8956 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 23:41:34.416460    8956 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.155840166s)
	I0914 23:41:34.416473    8956 ssh_runner.go:146] rm: /preloaded.tar.lz4
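The preload is a plain lz4-compressed tarball of /var/lib/docker image data, which is why it can be unpacked straight into /var with tar -I lz4 as above. To peek inside one without extracting it, the same GNU tar invocation works in list mode:

	tar -I lz4 -tf /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 | head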
	I0914 23:41:34.432310    8956 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 23:41:34.436073    8956 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0914 23:41:34.441451    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:34.501550    8956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 23:41:35.661199    8956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.159655s)
	I0914 23:41:35.661307    8956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 23:41:35.673180    8956 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 23:41:35.673188    8956 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 23:41:35.673193    8956 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 23:41:35.677715    8956 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:35.680088    8956 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:35.682796    8956 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0914 23:41:35.683232    8956 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:35.685417    8956 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:35.685627    8956 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:35.687730    8956 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:35.687953    8956 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 23:41:35.689553    8956 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:35.689636    8956 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:35.690891    8956 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:35.691008    8956 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:35.692471    8956 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:35.694538    8956 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:35.694629    8956 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:35.696235    8956 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:36.070596    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0914 23:41:36.082071    8956 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0914 23:41:36.082095    8956 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0914 23:41:36.082176    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0914 23:41:36.086507    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:36.094324    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0914 23:41:36.094462    8956 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0914 23:41:36.104058    8956 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0914 23:41:36.104079    8956 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:36.104115    8956 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0914 23:41:36.104133    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0914 23:41:36.104146    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0914 23:41:36.106650    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:36.109606    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:36.123510    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0914 23:41:36.123698    8956 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0914 23:41:36.125438    8956 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0914 23:41:36.125448    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0914 23:41:36.143841    8956 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0914 23:41:36.143864    8956 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:36.143887    8956 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0914 23:41:36.143900    8956 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:36.143941    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0914 23:41:36.143941    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 23:41:36.143992    8956 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0914 23:41:36.144043    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0914 23:41:36.144656    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:36.148665    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:36.178212    8956 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0914 23:41:36.182334    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0914 23:41:36.182520    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0914 23:41:36.183552    8956 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0914 23:41:36.183688    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:36.222828    8956 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0914 23:41:36.222850    8956 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0914 23:41:36.222861    8956 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:36.222860    8956 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:36.222948    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0914 23:41:36.222950    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0914 23:41:36.241649    8956 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0914 23:41:36.241674    8956 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:36.241740    8956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 23:41:36.267759    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0914 23:41:36.268335    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0914 23:41:36.307324    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 23:41:36.307470    8956 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0914 23:41:36.320529    8956 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0914 23:41:36.320563    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0914 23:41:36.411600    8956 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0914 23:41:36.411626    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0914 23:41:36.474611    8956 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 23:41:36.474749    8956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:36.518376    8956 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0914 23:41:36.518468    8956 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0914 23:41:36.518495    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0914 23:41:36.525660    8956 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0914 23:41:36.525685    8956 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:36.525758    8956 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:41:36.681446    8956 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0914 23:41:36.681483    8956 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 23:41:36.681627    8956 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 23:41:36.683067    8956 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0914 23:41:36.683081    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0914 23:41:36.713715    8956 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 23:41:36.713728    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0914 23:41:36.971146    8956 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 23:41:36.971190    8956 cache_images.go:92] duration metric: took 1.298005625s to LoadCachedImages
	W0914 23:41:36.971243    8956 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
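
Each image above follows the same three-step pattern: an existence check with stat on the guest, an scp from the host-side cache when the tarball is missing, then a pipe into docker load over SSH. A minimal Go sketch of that flow, assuming a hypothetical Runner interface standing in for minikube's ssh_runner:

import "fmt"

// Runner is an assumed stand-in for minikube's SSH command runner.
type Runner interface {
	Run(cmd string) error       // run a command on the guest
	Copy(src, dst string) error // scp a file from host to guest
}

// loadCachedImage mirrors the stat -> scp -> "docker load" sequence in
// the log: skip the transfer when the tarball already exists on the
// guest, otherwise copy it over, then load it into the runtime.
func loadCachedImage(r Runner, hostPath, guestPath string) error {
	// stat exits non-zero when the file is absent (the
	// "Process exited with status 1" entries above).
	if err := r.Run(fmt.Sprintf(`stat -c "%%s %%y" %s`, guestPath)); err != nil {
		if err := r.Copy(hostPath, guestPath); err != nil {
			return fmt.Errorf("transfer %s: %w", hostPath, err)
		}
	}
	return r.Run(fmt.Sprintf(`/bin/bash -c "sudo cat %s | docker load"`, guestPath))
}
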
	I0914 23:41:36.971255    8956 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0914 23:41:36.971318    8956 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-438000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 23:41:36.971399    8956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 23:41:36.987760    8956 cni.go:84] Creating CNI manager for ""
	I0914 23:41:36.987772    8956 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:41:36.987777    8956 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 23:41:36.987788    8956 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-438000 NodeName:stopped-upgrade-438000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 23:41:36.987854    8956 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-438000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
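
The kubeadm config above is generated, not hand-written. A toy sketch of how such YAML could be rendered with text/template from the options logged at kubeadm.go:181; the struct and field names here are illustrative assumptions, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// Illustrative only: a small slice of the InitConfiguration above,
// rendered from an options struct.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	opts := struct {
		AdvertiseAddress, CRISocket, NodeName string
		APIServerPort                         int
	}{"10.0.2.15", "unix:///var/run/cri-dockerd.sock", "stopped-upgrade-438000", 8443}
	// Error ignored for brevity in this sketch.
	_ = template.Must(template.New("kubeadm").Parse(initCfg)).Execute(os.Stdout, opts)
}
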
	
	I0914 23:41:36.987926    8956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0914 23:41:36.991532    8956 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 23:41:36.991581    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 23:41:36.994565    8956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0914 23:41:36.999649    8956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 23:41:37.004976    8956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0914 23:41:37.011461    8956 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0914 23:41:37.013147    8956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
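
The one-liner above makes the /etc/hosts update idempotent: grep -v strips any existing control-plane.minikube.internal line, echo appends the fresh entry, and sudo cp installs the temp file over /etc/hosts. A sketch of assembling that command string in Go (hostsUpdateCmd is a hypothetical helper, not minikube's):

import "fmt"

// hostsUpdateCmd builds the idempotent /etc/hosts rewrite run above.
func hostsUpdateCmd(ip, host string) string {
	entry := ip + "\t" + host // tab-separated, as in the log
	return fmt.Sprintf(
		`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		host, entry)
}
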
	I0914 23:41:37.017058    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:41:37.084242    8956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 23:41:37.089946    8956 certs.go:68] Setting up /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000 for IP: 10.0.2.15
	I0914 23:41:37.089986    8956 certs.go:194] generating shared ca certs ...
	I0914 23:41:37.089998    8956 certs.go:226] acquiring lock for ca certs: {Name:mkfb6b8e69b171081d1b5cff0d9e65dd76b6a9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:37.090276    8956 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.key
	I0914 23:41:37.090335    8956 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.key
	I0914 23:41:37.090344    8956 certs.go:256] generating profile certs ...
	I0914 23:41:37.090425    8956 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/client.key
	I0914 23:41:37.090439    8956 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key.85ef2f10
	I0914 23:41:37.090449    8956 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt.85ef2f10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0914 23:41:37.172424    8956 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt.85ef2f10 ...
	I0914 23:41:37.172441    8956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt.85ef2f10: {Name:mk21423c72c1ff74f64f5cd6e1e5865c0f9ee4cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:37.172722    8956 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key.85ef2f10 ...
	I0914 23:41:37.172728    8956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key.85ef2f10: {Name:mkb8833ac504d17eecb93561bd81ae06f7603029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:37.172861    8956 certs.go:381] copying /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt.85ef2f10 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt
	I0914 23:41:37.173002    8956 certs.go:385] copying /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key.85ef2f10 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key
	I0914 23:41:37.173167    8956 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/proxy-client.key
	I0914 23:41:37.173300    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093.pem (1338 bytes)
	W0914 23:41:37.173332    8956 certs.go:480] ignoring /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093_empty.pem, impossibly tiny 0 bytes
	I0914 23:41:37.173338    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 23:41:37.173364    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem (1082 bytes)
	I0914 23:41:37.173389    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem (1123 bytes)
	I0914 23:41:37.173415    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/key.pem (1679 bytes)
	I0914 23:41:37.173466    8956 certs.go:484] found cert: /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem (1708 bytes)
	I0914 23:41:37.173922    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 23:41:37.180912    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 23:41:37.188275    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 23:41:37.196293    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 23:41:37.204512    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 23:41:37.212646    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 23:41:37.219636    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 23:41:37.226567    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 23:41:37.233530    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/ssl/certs/70932.pem --> /usr/share/ca-certificates/70932.pem (1708 bytes)
	I0914 23:41:37.240618    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 23:41:37.247958    8956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/7093.pem --> /usr/share/ca-certificates/7093.pem (1338 bytes)
	I0914 23:41:37.255770    8956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 23:41:37.261654    8956 ssh_runner.go:195] Run: openssl version
	I0914 23:41:37.264062    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 23:41:37.267462    8956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:37.268985    8956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:40 /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:37.269011    8956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:41:37.270828    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 23:41:37.274017    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7093.pem && ln -fs /usr/share/ca-certificates/7093.pem /etc/ssl/certs/7093.pem"
	I0914 23:41:37.276986    8956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7093.pem
	I0914 23:41:37.278247    8956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:29 /usr/share/ca-certificates/7093.pem
	I0914 23:41:37.278272    8956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7093.pem
	I0914 23:41:37.279815    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7093.pem /etc/ssl/certs/51391683.0"
	I0914 23:41:37.282780    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70932.pem && ln -fs /usr/share/ca-certificates/70932.pem /etc/ssl/certs/70932.pem"
	I0914 23:41:37.285744    8956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70932.pem
	I0914 23:41:37.287213    8956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:29 /usr/share/ca-certificates/70932.pem
	I0914 23:41:37.287239    8956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70932.pem
	I0914 23:41:37.289094    8956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70932.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 23:41:37.292560    8956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 23:41:37.294276    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 23:41:37.296316    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 23:41:37.298303    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 23:41:37.300789    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 23:41:37.302819    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 23:41:37.304477    8956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 23:41:37.306391    8956 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51261 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 23:41:37.306462    8956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 23:41:37.316684    8956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 23:41:37.319876    8956 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 23:41:37.319883    8956 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 23:41:37.319910    8956 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 23:41:37.322881    8956 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:41:37.322922    8956 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-438000" does not appear in /Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:41:37.322937    8956 kubeconfig.go:62] /Users/jenkins/minikube-integration/19644-6577/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-438000" cluster setting kubeconfig missing "stopped-upgrade-438000" context setting]
	I0914 23:41:37.323096    8956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/kubeconfig: {Name:mke334fd43bb51604954449e74caf7f81dee5b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:41:37.323745    8956 kapi.go:59] client config for stopped-upgrade-438000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/client.key", CAFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104949800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 23:41:37.324761    8956 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 23:41:37.327723    8956 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-438000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
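
The drift decision above is just diff's exit status: the files match on exit 0, differ on exit 1, and anything else is an error. A hedged Go sketch of that check, using os/exec directly instead of minikube's SSH runner:

import (
	"fmt"
	"os/exec"
)

// configDrifted reports whether kubeadm.yaml.new differs from the
// kubeadm.yaml already on the node, mirroring the sudo diff -u above.
func configDrifted(oldPath, newPath string) (bool, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, nil // identical: no reconfigure needed
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		// Exit 1 means the files differ; out holds the unified diff
		// that the log prints between the -- stdout -- markers.
		fmt.Printf("detected kubeadm config drift:\n%s", out)
		return true, nil
	}
	return false, err // exit status >1 or exec failure
}
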
	I0914 23:41:37.327740    8956 kubeadm.go:1160] stopping kube-system containers ...
	I0914 23:41:37.327791    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 23:41:37.338833    8956 docker.go:483] Stopping containers: [9405ac203f41 6f2907013b5d 87eeb9536e45 ba34e94c3930 9edbecfd3df2 1faf6553ac06 72775498364e d019fc00a42a]
	I0914 23:41:37.338911    8956 ssh_runner.go:195] Run: docker stop 9405ac203f41 6f2907013b5d 87eeb9536e45 ba34e94c3930 9edbecfd3df2 1faf6553ac06 72775498364e d019fc00a42a
	I0914 23:41:37.349220    8956 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 23:41:37.355142    8956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:41:37.357831    8956 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 23:41:37.357837    8956 kubeadm.go:157] found existing configuration files:
	
	I0914 23:41:37.357864    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/admin.conf
	I0914 23:41:37.360518    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 23:41:37.360543    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 23:41:37.363583    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/kubelet.conf
	I0914 23:41:37.366167    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 23:41:37.366204    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 23:41:37.368737    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/controller-manager.conf
	I0914 23:41:37.371766    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 23:41:37.371791    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 23:41:37.374643    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/scheduler.conf
	I0914 23:41:37.377008    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 23:41:37.377036    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 23:41:37.379720    8956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:41:37.382321    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:37.403484    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:37.934575    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:38.061946    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:41:38.085583    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
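
Rather than a full kubeadm init, the restart path replays individual init phases in the order logged above: certs, kubeconfig, kubelet-start, control-plane, and etcd. A compressed sketch reusing the assumed Runner interface from the earlier sketch (the real invocation also quotes the PATH value):

// runPhase wraps the "kubeadm init phase ..." invocations above, with
// PATH pointed at the version-pinned binaries directory.
func runPhase(r Runner, phase string) error {
	return r.Run(fmt.Sprintf(
		"sudo env PATH=/var/lib/minikube/binaries/v1.24.1:$PATH kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml",
		phase))
}

func restartControlPlane(r Runner) error {
	for _, phase := range []string{
		"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
	} {
		if err := runPhase(r, phase); err != nil {
			return fmt.Errorf("init phase %s: %w", phase, err)
		}
	}
	return nil
}
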
	I0914 23:41:38.111815    8956 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:41:38.111906    8956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:38.614109    8956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:39.112710    8956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:41:39.116686    8956 api_server.go:72] duration metric: took 1.00489275s to wait for apiserver process to appear ...
	I0914 23:41:39.116696    8956 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:41:39.116706    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:44.118310    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:44.118342    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:49.118645    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:49.118677    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:54.118847    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:54.118904    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:41:59.119314    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:41:59.119371    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:04.120246    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:04.120335    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:09.121305    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:09.121358    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:14.122959    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:14.123009    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:19.124593    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:19.124674    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:24.126882    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:24.126905    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:29.129000    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:29.129043    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:34.131199    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:34.131226    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:39.133091    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
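
Each healthz probe above runs with a roughly five-second client timeout and the loop retries until an overall deadline; here the apiserver never answers, so every probe times out and control falls through to log collection. A minimal sketch of such a poll, with InsecureSkipVerify and the retry interval as stated simplifications (minikube actually trusts the cluster CA):

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it
// answers 200 OK or the deadline passes.
func waitForHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between probes above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // illustrative retry interval
	}
	return fmt.Errorf("apiserver never reported healthy at %s", url)
}
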
	I0914 23:42:39.133560    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:42:39.167558    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:42:39.167709    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:42:39.187553    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:42:39.187663    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:42:39.202428    8956 logs.go:276] 0 containers: []
	W0914 23:42:39.202439    8956 logs.go:278] No container was found matching "coredns"
	I0914 23:42:39.202506    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:42:39.214852    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:42:39.214939    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:42:39.225452    8956 logs.go:276] 0 containers: []
	W0914 23:42:39.225465    8956 logs.go:278] No container was found matching "kube-proxy"
	I0914 23:42:39.225532    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:42:39.236733    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:42:39.236807    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:42:39.246778    8956 logs.go:276] 0 containers: []
	W0914 23:42:39.246790    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:42:39.246859    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:42:39.256610    8956 logs.go:276] 0 containers: []
	W0914 23:42:39.256625    8956 logs.go:278] No container was found matching "storage-provisioner"
	I0914 23:42:39.256629    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:42:39.256635    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:42:39.268036    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:42:39.268047    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:42:39.378479    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:42:39.378490    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:42:39.392893    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:42:39.392903    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:42:39.407568    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:42:39.407578    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:42:39.423702    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:42:39.423713    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:42:39.446737    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:42:39.446749    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:42:39.465107    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:42:39.465116    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:42:39.487880    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:42:39.487890    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:42:39.514955    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:42:39.514964    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:42:39.519256    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:42:39.519263    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:42:39.536340    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:42:39.536350    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:42:39.553396    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:42:39.553405    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
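
While the apiserver stays down, each troubleshooting pass like the one above enumerates container IDs per component with docker ps -a --filter=name=k8s_<component> and tails the last 400 lines of each. A sketch of one gather step, with exec.Command standing in for execution over SSH:

import (
	"os/exec"
	"strings"
)

// gatherComponentLogs mirrors the ps -a / "docker logs --tail 400"
// pairs above: list all containers (running or exited) for a component,
// then collect each one's recent log tail.
func gatherComponentLogs(component string) ([]byte, error) {
	ids, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	var all []byte
	for _, id := range strings.Fields(string(ids)) {
		out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		all = append(all, out...)
	}
	return all, nil
}
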
	I0914 23:42:42.078537    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:47.081132    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:47.081416    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:42:47.111559    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:42:47.111693    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:42:47.129602    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:42:47.129706    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:42:47.142712    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:42:47.142807    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:42:47.154237    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:42:47.154318    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:42:47.164583    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:42:47.164669    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:42:47.177313    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:42:47.177394    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:42:47.187255    8956 logs.go:276] 0 containers: []
	W0914 23:42:47.187269    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:42:47.187337    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:42:47.202106    8956 logs.go:276] 1 containers: [bbe9ac8055ea]
	I0914 23:42:47.202125    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:42:47.202130    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:42:47.215886    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:42:47.215896    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:42:47.230745    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:42:47.230756    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:42:47.247792    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:42:47.247803    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:42:47.274647    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:42:47.274654    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:42:47.288955    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:42:47.288965    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:42:47.302203    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:42:47.302213    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:42:47.314051    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:42:47.314061    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:42:47.318758    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:42:47.318764    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:42:47.331150    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:42:47.331160    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:42:47.354088    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:42:47.354099    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:42:47.371709    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:42:47.371718    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:42:47.410244    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:42:47.410265    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:42:47.422220    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:42:47.422236    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:42:47.440105    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:42:47.440120    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:42:47.451581    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:42:47.451591    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:42:49.979273    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:42:54.981427    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:42:54.981688    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:42:55.003796    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:42:55.003926    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:42:55.018873    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:42:55.018964    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:42:55.035618    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:42:55.035707    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:42:55.050739    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:42:55.050822    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:42:55.063894    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:42:55.063975    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:42:55.074532    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:42:55.074616    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:42:55.085114    8956 logs.go:276] 0 containers: []
	W0914 23:42:55.085124    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:42:55.085195    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:42:55.095806    8956 logs.go:276] 1 containers: [bbe9ac8055ea]
	I0914 23:42:55.095828    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:42:55.095834    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:42:55.110384    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:42:55.110394    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:42:55.128582    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:42:55.128592    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:42:55.139987    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:42:55.140000    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:42:55.168163    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:42:55.168170    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:42:55.181402    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:42:55.181412    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:42:55.192637    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:42:55.192647    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:42:55.210339    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:42:55.210350    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:42:55.248194    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:42:55.248204    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:42:55.262086    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:42:55.262096    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:42:55.277591    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:42:55.277602    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:42:55.302463    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:42:55.302470    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:42:55.314357    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:42:55.314368    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:42:55.332662    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:42:55.332671    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:42:55.355280    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:42:55.355296    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:42:55.359525    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:42:55.359533    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:42:57.882053    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:02.884173    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:02.884427    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:02.904666    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:02.904782    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:02.919731    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:02.919825    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:02.931314    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:02.931388    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:02.942255    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:02.942368    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:02.952732    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:02.952816    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:02.968153    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:02.968247    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:02.980254    8956 logs.go:276] 0 containers: []
	W0914 23:43:02.980265    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:02.980336    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:02.991045    8956 logs.go:276] 1 containers: [bbe9ac8055ea]
	I0914 23:43:02.991061    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:02.991067    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:03.017822    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:03.017830    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:03.021747    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:03.021755    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:03.045025    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:03.045035    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:03.060849    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:03.060859    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:03.086776    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:03.086783    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:03.100105    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:03.100115    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:03.116090    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:03.116104    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:03.127756    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:03.127766    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:03.175214    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:03.175225    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:03.194013    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:03.194025    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:03.213623    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:03.213632    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:03.228227    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:03.228238    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:03.241246    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:03.241257    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:03.252792    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:03.252802    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:03.263671    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:03.263681    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:05.776707    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:10.777258    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:10.777425    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:10.790496    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:10.790585    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:10.801889    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:10.801974    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:10.812149    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:10.812235    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:10.822613    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:10.822703    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:10.833638    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:10.833719    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:10.844135    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:10.844224    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:10.854263    8956 logs.go:276] 0 containers: []
	W0914 23:43:10.854274    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:10.854340    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:10.864825    8956 logs.go:276] 1 containers: [bbe9ac8055ea]
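
	The docker ps runs above enumerate kubelet-managed containers one component at a time by their k8s_<component> name prefix; logs.go:276 then records the count and IDs. The same enumeration as a compact loop, with the component list copied from the log:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	        printf '%s: %s\n' "$c" "${ids:-<none>}"   # kindnet reports none here
	    done
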
	I0914 23:43:10.864843    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:10.864849    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:10.882882    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:10.882896    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:10.909457    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:10.909465    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:10.928349    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:10.928363    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:10.932557    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:10.932563    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:10.968026    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:10.968037    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:10.980572    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:10.980583    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:10.991588    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:10.991599    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:11.007515    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:11.007527    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:11.018647    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:11.018659    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:11.036515    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:11.036527    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:11.064380    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:11.064387    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:11.078176    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:11.078190    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:11.092567    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:11.092582    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:11.119506    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:11.119523    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:11.131819    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:11.131831    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
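
	For every container ID the enumeration finds, the cycle then tails the last 400 lines of that container's log. The same capture as a loop, with a few IDs copied from the enumeration above standing in for the full set:

	    # Tail each control-plane container's log, as the "docker logs --tail 400"
	    # runs above do; 2>&1 because container logs usually arrive on stderr.
	    for id in 9a18c39c6c87 b14f8a592eaa 099863b623ad 9edbecfd3df2; do
	        echo "==> $id"
	        docker logs --tail 400 "$id" 2>&1
	    done
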
	I0914 23:43:13.647737    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:18.649831    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:18.650033    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:18.668930    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:18.669037    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:18.685728    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:18.685817    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:18.697748    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:18.697831    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:18.708553    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:18.708637    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:18.719478    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:18.719552    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:18.730468    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:18.730563    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:18.742128    8956 logs.go:276] 0 containers: []
	W0914 23:43:18.742141    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:18.742216    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:18.753279    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:18.753296    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:18.753301    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:18.784769    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:18.784779    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:18.789198    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:18.789204    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:18.805271    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:18.805282    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:18.820230    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:18.820248    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:18.848819    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:18.848832    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:18.871170    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:18.871180    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:18.883266    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:18.883281    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:18.895079    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:18.895092    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:18.909028    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:18.909038    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:18.927275    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:18.927284    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:18.939114    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:18.939125    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:18.954670    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:18.954680    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:18.998858    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:18.998868    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:19.014073    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:19.014083    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:19.027027    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:19.027040    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:19.039297    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:19.039306    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
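
	Alongside the per-container tails, each cycle captures host-side logs: the kubelet journal, the Docker and cri-docker journals, and warning-level-and-above kernel messages. The three commands, lifted verbatim from the log and runnable inside the guest as-is:

	    sudo journalctl -u kubelet -n 400                  # kubelet unit log
	    sudo journalctl -u docker -u cri-docker -n 400     # Docker + CRI shim
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
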
	I0914 23:43:21.566600    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:26.567995    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:26.568206    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:26.584525    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:26.584629    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:26.596787    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:26.596874    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:26.611680    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:26.611759    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:26.622026    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:26.622105    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:26.632617    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:26.632709    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:26.643101    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:26.643170    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:26.653742    8956 logs.go:276] 0 containers: []
	W0914 23:43:26.653757    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:26.653829    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:26.667915    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:26.667933    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:26.667939    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:26.696231    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:26.696242    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:26.700638    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:26.700644    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:26.736934    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:26.736946    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:26.772785    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:26.772802    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:26.784970    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:26.784982    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:26.809647    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:26.809659    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:26.823260    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:26.823271    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:26.837359    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:26.837373    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:26.848563    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:26.848575    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:26.864094    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:26.864102    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:26.881423    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:26.881433    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:26.892830    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:26.892840    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:26.904775    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:26.904784    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:26.917804    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:26.917815    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:26.932559    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:26.932569    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:26.944075    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:26.944084    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:29.462997    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:34.465134    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:34.465392    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:34.488886    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:34.488997    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:34.505912    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:34.506007    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:34.519383    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:34.519467    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:34.531036    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:34.531121    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:34.541522    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:34.541596    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:34.551915    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:34.551984    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:34.562002    8956 logs.go:276] 0 containers: []
	W0914 23:43:34.562021    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:34.562115    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:34.572503    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:34.572522    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:34.572528    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:34.585998    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:34.586009    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:34.597892    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:34.597903    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:34.609675    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:34.609686    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:34.614448    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:34.614457    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:34.626785    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:34.626795    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:34.638460    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:34.638470    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:34.652615    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:34.652625    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:34.679656    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:34.679670    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:34.695405    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:34.695415    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:34.715661    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:34.715673    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:34.744996    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:34.745009    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:34.760428    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:34.760441    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:34.774966    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:34.774978    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:34.786743    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:34.786754    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:34.822525    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:34.822538    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:34.840615    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:34.840627    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
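
	The "describe nodes" step earlier in this cycle (and in every cycle) shells out to the kubectl binary minikube unpacks inside the VM, pointed at the cluster's on-disk kubeconfig; both paths appear verbatim above, and by the timestamps it is consistently the slowest of the gather steps:

	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig

	Run by hand, this is a quick way to check whether the node object exists at all while /healthz keeps timing out.
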
	I0914 23:43:37.368167    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:42.368973    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:42.369164    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:42.384273    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:42.384376    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:42.395998    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:42.396078    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:42.407208    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:42.407293    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:42.419184    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:42.419271    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:42.429126    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:42.429201    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:42.439811    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:42.439882    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:42.449997    8956 logs.go:276] 0 containers: []
	W0914 23:43:42.450008    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:42.450068    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:42.460235    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:42.460257    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:42.460264    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:42.474532    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:42.474544    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:42.486519    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:42.486529    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:42.510730    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:42.510742    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:42.525569    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:42.525581    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:42.551501    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:42.551508    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:42.578395    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:42.578401    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:42.612230    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:42.612244    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:42.626073    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:42.626082    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:42.637203    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:42.637214    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:42.659355    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:42.659366    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:42.670817    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:42.670828    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:42.689341    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:42.689350    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:42.707803    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:42.707820    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:42.720469    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:42.720483    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:42.725043    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:42.725049    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:42.738123    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:42.738133    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:45.253843    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:50.256088    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:50.256292    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:50.269724    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:50.269817    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:50.281335    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:50.281416    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:50.291737    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:50.291820    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:50.302508    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:50.302598    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:50.314425    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:50.314507    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:50.324912    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:50.324992    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:50.334925    8956 logs.go:276] 0 containers: []
	W0914 23:43:50.334936    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:50.335009    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:50.349103    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:50.349122    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:50.349127    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:43:50.363206    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:50.363218    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:50.380342    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:50.380352    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:50.384929    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:50.384938    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:50.398830    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:50.398842    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:50.410078    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:50.410090    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:50.429531    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:50.429541    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:50.446795    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:50.446804    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:50.458760    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:50.458773    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:50.484525    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:50.484533    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:50.513126    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:50.513134    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:50.553822    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:50.553835    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:50.567412    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:50.567422    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:50.586825    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:50.586834    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:50.598727    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:50.598743    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:50.610619    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:50.610629    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:50.633499    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:50.633510    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:53.151629    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:43:58.153853    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:43:58.154157    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:43:58.180136    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:43:58.180279    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:43:58.196416    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:43:58.196499    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:43:58.209558    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:43:58.209648    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:43:58.221026    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:43:58.221114    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:43:58.231606    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:43:58.231680    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:43:58.242268    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:43:58.242347    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:43:58.253109    8956 logs.go:276] 0 containers: []
	W0914 23:43:58.253124    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:43:58.253192    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:43:58.263561    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:43:58.263579    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:43:58.263585    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:43:58.275580    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:43:58.275594    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:43:58.287004    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:43:58.287015    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:43:58.312261    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:43:58.312273    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:43:58.324239    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:43:58.324249    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:43:58.341246    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:43:58.341256    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:43:58.345600    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:43:58.345609    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:43:58.356941    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:43:58.356951    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:43:58.368028    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:43:58.368042    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:43:58.394228    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:43:58.394238    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:43:58.422562    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:43:58.422570    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:43:58.437204    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:43:58.437214    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:43:58.452776    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:43:58.452786    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:43:58.470794    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:43:58.470803    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:43:58.507363    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:43:58.507372    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:43:58.521795    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:43:58.521803    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:43:58.534580    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:43:58.534591    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:01.050407    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:06.051217    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:06.051445    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:06.070734    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:06.070851    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:06.084664    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:06.084760    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:06.096834    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:06.096916    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:06.109090    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:06.109181    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:06.120103    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:06.120183    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:06.130530    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:06.130608    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:06.140971    8956 logs.go:276] 0 containers: []
	W0914 23:44:06.140984    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:06.141051    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:06.151576    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:06.151597    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:06.151602    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:06.156014    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:06.156020    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:06.169432    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:06.169446    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:06.186164    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:06.186175    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:06.199780    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:06.199794    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:06.217973    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:06.217983    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:06.256182    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:06.256198    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:06.268516    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:06.268528    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:06.293926    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:06.293937    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:06.305595    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:06.305608    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:06.317691    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:06.317702    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:06.345105    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:06.345113    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:06.359057    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:06.359071    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:06.378752    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:06.378762    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:06.390675    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:06.390687    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:06.406854    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:06.406866    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:06.418998    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:06.419008    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:08.944558    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:13.945630    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:13.945829    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:13.962298    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:13.962395    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:13.979267    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:13.979353    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:13.990029    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:13.990106    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:14.001447    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:14.001534    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:14.011826    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:14.011904    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:14.023098    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:14.023177    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:14.033013    8956 logs.go:276] 0 containers: []
	W0914 23:44:14.033027    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:14.033093    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:14.047124    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:14.047148    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:14.047154    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:14.073972    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:14.073979    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:14.087713    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:14.087726    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:14.102078    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:14.102088    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:14.113159    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:14.113171    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:14.148685    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:14.148695    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:14.165393    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:14.165403    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:14.184231    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:14.184242    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:14.195850    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:14.195864    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:14.200360    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:14.200367    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:14.213731    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:14.213746    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:14.228080    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:14.228091    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:14.253619    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:14.253627    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:14.276818    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:14.276830    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:14.297395    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:14.297406    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:14.315273    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:14.315282    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:14.326433    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:14.326442    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:16.840154    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:21.842527    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:21.842675    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:21.856673    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:21.856765    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:21.868811    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:21.868903    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:21.879386    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:21.879466    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:21.889576    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:21.889656    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:21.900404    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:21.900475    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:21.911476    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:21.911543    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:21.921554    8956 logs.go:276] 0 containers: []
	W0914 23:44:21.921566    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:21.921637    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:21.932307    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:21.932328    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:21.932334    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:21.947163    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:21.947172    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:21.969569    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:21.969579    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:21.984600    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:21.984611    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:22.012086    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:22.012094    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:22.025590    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:22.025600    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:22.039731    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:22.039744    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:22.051326    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:22.051338    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:22.067009    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:22.067018    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:22.085324    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:22.085336    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:22.120628    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:22.120639    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:22.134524    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:22.134534    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:22.138745    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:22.138753    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:22.152042    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:22.152052    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:22.169293    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:22.169303    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:22.180838    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:22.180848    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:22.205175    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:22.205184    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:24.719241    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:29.721861    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:29.722209    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:29.760637    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:29.760761    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:29.776387    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:29.776484    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:29.790725    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:29.790809    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:29.802379    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:29.802468    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:29.812824    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:29.812908    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:29.824001    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:29.824081    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:29.834185    8956 logs.go:276] 0 containers: []
	W0914 23:44:29.834197    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:29.834269    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:29.843954    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:29.843969    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:29.843974    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:29.863389    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:29.863399    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:29.876246    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:29.876256    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:29.887760    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:29.887774    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:29.899596    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:29.899610    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:29.935927    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:29.935938    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:29.950309    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:29.950322    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:29.964547    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:29.964559    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:29.987869    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:29.987883    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:29.998986    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:29.998995    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:30.022692    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:30.022704    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:30.035075    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:30.035088    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:30.064991    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:30.065003    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:30.069799    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:30.069807    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:30.085738    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:30.085749    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:30.103682    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:30.103696    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:30.117531    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:30.117543    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
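
Editor's note: the block above is one full iteration of minikube's apiserver wait loop. Every probe of https://10.0.2.15:8443/healthz times out after 5 seconds, and each timeout triggers a fresh sweep of the per-component container logs, the kubelet/docker/cri-docker journals, dmesg, and kubectl describe nodes; the cycle then repeats until the restart deadline expires. The same probe-and-gather cycle can be reproduced by hand inside the guest. The sketch below is illustrative only — the address, timeout, and component names are taken from the log, but the loop itself is not minikube's code:

#!/bin/bash
# Probe the apiserver with the same 5s client timeout seen in the log;
# on each failure, collect the same evidence minikube gathers.
APISERVER="https://10.0.2.15:8443/healthz"
for attempt in 1 2 3 4 5; do
  if curl -sk --max-time 5 "$APISERVER" | grep -q ok; then
    echo "apiserver healthy"; exit 0
  fi
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager storage-provisioner; do
    # list all containers (running or exited) for the component, then
    # dump the last 400 lines of each, mirroring the commands above
    docker ps -a --filter="name=k8s_${name}" --format='{{.ID}}' |
      xargs -r -n1 docker logs --tail 400
  done
  sudo journalctl -u kubelet -u docker -u cri-docker -n 400
done
echo "apiserver never became healthy" >&2; exit 1
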
	I0914 23:44:32.631422    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:37.632868    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:37.633403    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:37.670682    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:37.670847    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:37.692410    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:37.692549    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:37.708669    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:37.708758    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:37.722881    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:37.722966    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:37.734742    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:37.734827    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:37.745698    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:37.745775    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:37.758051    8956 logs.go:276] 0 containers: []
	W0914 23:44:37.758065    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:37.758141    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:37.774110    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:37.774127    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:37.774132    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:37.793293    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:37.793305    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:37.810024    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:37.810036    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:37.823141    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:37.823152    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:37.834814    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:37.834826    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:37.846253    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:37.846263    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:37.870800    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:37.870808    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:37.882133    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:37.882168    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:37.897197    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:37.897212    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:37.912464    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:37.912474    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:37.935467    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:37.935480    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:37.951460    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:37.951473    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:37.980036    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:37.980045    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:37.984579    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:37.984588    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:38.020124    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:38.020139    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:38.033815    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:38.033826    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:38.047428    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:38.047439    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:40.567829    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:45.570312    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:45.570448    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:45.583620    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:45.583708    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:45.594103    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:45.594186    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:45.604714    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:45.604794    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:45.615654    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:45.615731    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:45.627757    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:45.627838    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:45.638594    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:45.638676    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:45.649276    8956 logs.go:276] 0 containers: []
	W0914 23:44:45.649292    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:45.649367    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:45.659494    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:45.659513    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:45.659519    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:45.664138    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:45.664147    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:45.676846    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:45.676856    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:45.691670    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:45.691681    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:45.711828    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:45.711839    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:45.729657    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:45.729668    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:45.753118    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:45.753126    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:45.768677    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:45.768687    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:45.803811    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:45.803822    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:45.817828    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:45.817841    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:45.829112    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:45.829123    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:45.841046    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:45.841057    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:45.852526    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:45.852536    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:45.885650    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:45.885667    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:45.908595    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:45.908612    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:45.926507    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:45.926517    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:45.938451    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:45.938462    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:48.452105    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:44:53.454395    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:44:53.454574    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:44:53.465429    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:44:53.465517    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:44:53.476079    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:44:53.476152    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:44:53.486570    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:44:53.486648    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:44:53.498812    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:44:53.498899    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:44:53.508941    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:44:53.509027    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:44:53.523079    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:44:53.523168    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:44:53.533234    8956 logs.go:276] 0 containers: []
	W0914 23:44:53.533247    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:44:53.533314    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:44:53.544268    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:44:53.544287    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:44:53.544292    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:44:53.556549    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:44:53.556584    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:44:53.568574    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:44:53.568585    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:44:53.603893    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:44:53.603906    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:44:53.616753    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:44:53.616764    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:44:53.630848    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:44:53.630863    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:44:53.654033    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:44:53.654043    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:44:53.666672    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:44:53.666682    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:44:53.683574    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:44:53.683584    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:44:53.695304    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:44:53.695316    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:44:53.718689    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:44:53.718696    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:44:53.745895    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:44:53.745910    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:44:53.761291    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:44:53.761301    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:44:53.772414    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:44:53.772425    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:44:53.794144    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:44:53.794154    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:44:53.798358    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:44:53.798368    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:44:53.812089    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:44:53.812097    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:44:56.329944    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:01.332143    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:01.332413    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:01.361277    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:45:01.361432    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:01.378488    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:45:01.378591    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:01.392035    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:45:01.392122    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:01.403564    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:45:01.403646    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:01.414785    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:45:01.414858    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:01.425059    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:45:01.425143    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:01.435423    8956 logs.go:276] 0 containers: []
	W0914 23:45:01.435430    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:01.435490    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:01.445581    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:45:01.445605    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:45:01.445610    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:45:01.457297    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:01.457310    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:01.493106    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:45:01.493117    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:45:01.504565    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:45:01.504577    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:45:01.527659    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:45:01.527670    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:45:01.546127    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:45:01.546137    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:45:01.564291    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:45:01.564305    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:45:01.583551    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:45:01.583561    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:45:01.597808    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:45:01.597821    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:45:01.609365    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:01.609380    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:01.636371    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:01.636378    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:01.640925    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:45:01.640931    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:45:01.666421    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:45:01.666436    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:01.678222    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:45:01.678232    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:45:01.694945    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:45:01.694956    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:45:01.710673    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:45:01.710689    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:45:01.722992    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:01.723002    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:04.249584    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:09.251894    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:09.252158    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:09.279580    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:45:09.279711    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:09.297045    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:45:09.297157    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:09.310231    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:45:09.310308    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:09.323532    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:45:09.323624    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:09.334835    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:45:09.334913    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:09.345685    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:45:09.345771    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:09.356667    8956 logs.go:276] 0 containers: []
	W0914 23:45:09.356677    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:09.356748    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:09.367110    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:45:09.367135    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:45:09.367141    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:45:09.379014    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:45:09.379027    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:09.391688    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:09.391699    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:09.420951    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:45:09.420959    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:45:09.433562    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:45:09.433575    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:45:09.446933    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:45:09.446943    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:45:09.465503    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:45:09.465517    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:45:09.477209    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:45:09.477222    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:45:09.500867    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:45:09.500880    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:45:09.516525    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:45:09.516537    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:45:09.535782    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:09.535791    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:09.540198    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:45:09.540206    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:45:09.561575    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:45:09.561588    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:45:09.574418    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:45:09.574430    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:45:09.592710    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:09.592723    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:09.617762    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:09.617779    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:09.657534    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:45:09.657546    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:45:12.171783    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:17.174031    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:17.174307    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:17.197913    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:45:17.198065    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:17.214057    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:45:17.214152    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:17.226836    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:45:17.226923    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:17.238220    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:45:17.238306    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:17.249098    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:45:17.249176    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:17.259846    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:45:17.259919    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:17.270544    8956 logs.go:276] 0 containers: []
	W0914 23:45:17.270555    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:17.270625    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:17.281159    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:45:17.281178    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:45:17.281184    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:45:17.304053    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:45:17.304067    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:45:17.321973    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:17.321982    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:17.346035    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:17.346041    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:17.374042    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:17.374051    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:17.378033    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:45:17.378039    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:45:17.392233    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:17.392243    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:17.426766    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:45:17.426776    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:45:17.440182    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:45:17.440193    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:45:17.451700    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:45:17.451711    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:45:17.463508    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:45:17.463519    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:45:17.474832    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:45:17.474843    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:17.486908    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:45:17.486918    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:45:17.501748    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:45:17.501759    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:45:17.523914    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:45:17.523924    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:45:17.536423    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:45:17.536435    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:45:17.551484    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:45:17.551496    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:45:20.076154    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:25.076907    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:25.077358    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:25.118700    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:45:25.118875    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:25.142915    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:45:25.143042    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:25.157633    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:45:25.157723    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:25.169574    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:45:25.169679    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:25.180240    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:45:25.180311    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:25.190889    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:45:25.190971    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:25.200991    8956 logs.go:276] 0 containers: []
	W0914 23:45:25.201001    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:25.201064    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:25.212068    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:45:25.212082    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:45:25.212088    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:45:25.231976    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:45:25.231989    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:45:25.243196    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:45:25.243207    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:45:25.254703    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:45:25.254716    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:45:25.268368    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:45:25.268381    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:45:25.291616    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:45:25.291626    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:45:25.303197    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:25.303207    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:25.333107    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:45:25.333118    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:45:25.345643    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:45:25.345653    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:45:25.361469    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:45:25.361479    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:45:25.379449    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:45:25.379459    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:45:25.396805    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:25.396815    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:25.420295    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:25.420303    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:25.424281    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:25.424291    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:25.466440    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:45:25.466451    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:45:25.481071    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:45:25.481081    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:45:25.499117    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:45:25.499127    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:28.013168    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:33.015533    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:33.016107    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:45:33.059431    8956 logs.go:276] 2 containers: [9a18c39c6c87 b14f8a592eaa]
	I0914 23:45:33.059595    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:45:33.080589    8956 logs.go:276] 2 containers: [099863b623ad 9edbecfd3df2]
	I0914 23:45:33.080726    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:45:33.096205    8956 logs.go:276] 1 containers: [fd43fbdad19c]
	I0914 23:45:33.096300    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:45:33.109096    8956 logs.go:276] 2 containers: [0cbd12a81abc 1faf6553ac06]
	I0914 23:45:33.109190    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:45:33.120295    8956 logs.go:276] 1 containers: [97bb63c97f73]
	I0914 23:45:33.120373    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:45:33.132789    8956 logs.go:276] 2 containers: [7fb0c17c4cdc 10ed9924dc61]
	I0914 23:45:33.132876    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:45:33.143085    8956 logs.go:276] 0 containers: []
	W0914 23:45:33.143100    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:45:33.143169    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:45:33.153651    8956 logs.go:276] 2 containers: [1e5911083cb5 bbe9ac8055ea]
	I0914 23:45:33.153668    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:45:33.153673    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:45:33.180309    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:45:33.180318    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:45:33.214915    8956 logs.go:123] Gathering logs for kube-proxy [97bb63c97f73] ...
	I0914 23:45:33.214925    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97bb63c97f73"
	I0914 23:45:33.230938    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:45:33.230950    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:45:33.253361    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:45:33.253369    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:45:33.257374    8956 logs.go:123] Gathering logs for kube-scheduler [0cbd12a81abc] ...
	I0914 23:45:33.257381    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cbd12a81abc"
	I0914 23:45:33.284982    8956 logs.go:123] Gathering logs for kube-scheduler [1faf6553ac06] ...
	I0914 23:45:33.284992    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1faf6553ac06"
	I0914 23:45:33.309991    8956 logs.go:123] Gathering logs for storage-provisioner [1e5911083cb5] ...
	I0914 23:45:33.310001    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e5911083cb5"
	I0914 23:45:33.322000    8956 logs.go:123] Gathering logs for kube-apiserver [9a18c39c6c87] ...
	I0914 23:45:33.322011    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a18c39c6c87"
	I0914 23:45:33.336258    8956 logs.go:123] Gathering logs for storage-provisioner [bbe9ac8055ea] ...
	I0914 23:45:33.336268    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe9ac8055ea"
	I0914 23:45:33.347883    8956 logs.go:123] Gathering logs for kube-controller-manager [7fb0c17c4cdc] ...
	I0914 23:45:33.347894    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb0c17c4cdc"
	I0914 23:45:33.365961    8956 logs.go:123] Gathering logs for kube-controller-manager [10ed9924dc61] ...
	I0914 23:45:33.365973    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ed9924dc61"
	I0914 23:45:33.384161    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:45:33.384171    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:45:33.396442    8956 logs.go:123] Gathering logs for kube-apiserver [b14f8a592eaa] ...
	I0914 23:45:33.396452    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b14f8a592eaa"
	I0914 23:45:33.409504    8956 logs.go:123] Gathering logs for etcd [099863b623ad] ...
	I0914 23:45:33.409520    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 099863b623ad"
	I0914 23:45:33.428217    8956 logs.go:123] Gathering logs for etcd [9edbecfd3df2] ...
	I0914 23:45:33.428227    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9edbecfd3df2"
	I0914 23:45:33.442764    8956 logs.go:123] Gathering logs for coredns [fd43fbdad19c] ...
	I0914 23:45:33.442773    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd43fbdad19c"
	I0914 23:45:35.956055    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:40.958362    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:40.958635    8956 kubeadm.go:597] duration metric: took 4m3.64337675s to restartPrimaryControlPlane
	W0914 23:45:40.958760    8956 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 23:45:40.958801    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0914 23:45:41.969705    8956 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.010909s)
	I0914 23:45:41.969780    8956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:45:41.974802    8956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:45:41.977738    8956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:45:41.980593    8956 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 23:45:41.980600    8956 kubeadm.go:157] found existing configuration files:
	
	I0914 23:45:41.980628    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/admin.conf
	I0914 23:45:41.983845    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 23:45:41.983878    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 23:45:41.987102    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/kubelet.conf
	I0914 23:45:41.990024    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 23:45:41.990048    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 23:45:41.992695    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/controller-manager.conf
	I0914 23:45:41.995737    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 23:45:41.995763    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 23:45:41.998879    8956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/scheduler.conf
	I0914 23:45:42.001394    8956 kubeadm.go:163] "https://control-plane.minikube.internal:51261" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51261 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 23:45:42.001422    8956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
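
Editor's note: all four grep checks exit with status 2 because the kubeadm reset above already removed the kubeconfigs, so each follow-up rm -f is a no-op. The point of the sequence is to drop any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint before re-running init. A compact sketch of the same check (illustrative; port 51261 is the host-forwarded endpoint of this particular run):

# Remove any kubeconfig that does not point at the expected
# control-plane endpoint (mirrors the grep/rm pairs above).
ENDPOINT="https://control-plane.minikube.internal:51261"
for f in admin kubelet controller-manager scheduler; do
  conf="/etc/kubernetes/${f}.conf"
  sudo grep -q "$ENDPOINT" "$conf" 2>/dev/null || sudo rm -f "$conf"
done
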
	I0914 23:45:42.004123    8956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 23:45:42.022997    8956 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0914 23:45:42.023029    8956 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 23:45:42.075339    8956 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 23:45:42.075394    8956 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 23:45:42.075440    8956 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 23:45:42.123962    8956 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 23:45:42.128304    8956 out.go:235]   - Generating certificates and keys ...
	I0914 23:45:42.128347    8956 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 23:45:42.128389    8956 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 23:45:42.128433    8956 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 23:45:42.128469    8956 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 23:45:42.128512    8956 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 23:45:42.128539    8956 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 23:45:42.128578    8956 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 23:45:42.128607    8956 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 23:45:42.128640    8956 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 23:45:42.128677    8956 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 23:45:42.128696    8956 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 23:45:42.128729    8956 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 23:45:42.386689    8956 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 23:45:42.521657    8956 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 23:45:42.574615    8956 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 23:45:42.745170    8956 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 23:45:42.776403    8956 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 23:45:42.776702    8956 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 23:45:42.776731    8956 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 23:45:42.845138    8956 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 23:45:42.850063    8956 out.go:235]   - Booting up control plane ...
	I0914 23:45:42.850113    8956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 23:45:42.850178    8956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 23:45:42.850222    8956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 23:45:42.850279    8956 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 23:45:42.850353    8956 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 23:45:47.350217    8956 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501765 seconds
	I0914 23:45:47.350278    8956 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 23:45:47.354438    8956 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 23:45:47.865209    8956 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 23:45:47.865412    8956 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-438000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 23:45:48.369910    8956 kubeadm.go:310] [bootstrap-token] Using token: bbd4ls.6ujjfp6cj079ummm
	I0914 23:45:48.376220    8956 out.go:235]   - Configuring RBAC rules ...
	I0914 23:45:48.376292    8956 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 23:45:48.376357    8956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 23:45:48.378084    8956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 23:45:48.382867    8956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 23:45:48.383860    8956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 23:45:48.384647    8956 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 23:45:48.387971    8956 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 23:45:48.527004    8956 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 23:45:48.774337    8956 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
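
The bootstrap token bbd4ls.6ujjfp6cj079ummm minted above can be inspected on the node with the standard kubeadm command (a manual check; output shape abbreviated and illustrative):

    sudo kubeadm token list
    # TOKEN                     TTL   USAGES                   ...
    # bbd4ls.6ujjfp6cj079ummm   23h   authentication,signing   ...
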
	I0914 23:45:48.774802    8956 kubeadm.go:310] 
	I0914 23:45:48.774835    8956 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 23:45:48.774841    8956 kubeadm.go:310] 
	I0914 23:45:48.774882    8956 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 23:45:48.774885    8956 kubeadm.go:310] 
	I0914 23:45:48.774897    8956 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 23:45:48.774944    8956 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 23:45:48.774980    8956 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 23:45:48.774984    8956 kubeadm.go:310] 
	I0914 23:45:48.775019    8956 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 23:45:48.775022    8956 kubeadm.go:310] 
	I0914 23:45:48.775047    8956 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 23:45:48.775049    8956 kubeadm.go:310] 
	I0914 23:45:48.775082    8956 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 23:45:48.775118    8956 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 23:45:48.775150    8956 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 23:45:48.775153    8956 kubeadm.go:310] 
	I0914 23:45:48.775202    8956 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 23:45:48.775245    8956 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 23:45:48.775250    8956 kubeadm.go:310] 
	I0914 23:45:48.775294    8956 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bbd4ls.6ujjfp6cj079ummm \
	I0914 23:45:48.775354    8956 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3496b266fd1cfe9142221ef290f09745f4c6a279684c03f4e3160434112e5d40 \
	I0914 23:45:48.775364    8956 kubeadm.go:310] 	--control-plane 
	I0914 23:45:48.775368    8956 kubeadm.go:310] 
	I0914 23:45:48.775415    8956 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 23:45:48.775419    8956 kubeadm.go:310] 
	I0914 23:45:48.775466    8956 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bbd4ls.6ujjfp6cj079ummm \
	I0914 23:45:48.775525    8956 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3496b266fd1cfe9142221ef290f09745f4c6a279684c03f4e3160434112e5d40 
	I0914 23:45:48.775686    8956 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
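
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. If the hash is ever lost it can be recomputed on the control plane from /etc/kubernetes/pki/ca.crt using the command documented by kubeadm:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # expected here: 3496b266fd1cfe9142221ef290f09745f4c6a279684c03f4e3160434112e5d40
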
	I0914 23:45:48.775735    8956 cni.go:84] Creating CNI manager for ""
	I0914 23:45:48.775745    8956 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:45:48.779043    8956 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 23:45:48.785970    8956 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 23:45:48.789127    8956 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
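
minikube pushes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist. The file's exact contents are not logged; a representative bridge-plus-portmap chain of the kind such a conflist contains looks roughly like the commented sketch below (illustrative only, details may differ):

    cat /etc/cni/net.d/1-k8s.conflist
    # Illustrative shape only:
    # {
    #   "cniVersion": "0.3.1",
    #   "name": "bridge",
    #   "plugins": [
    #     {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
    #      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    #     {"type": "portmap", "capabilities": {"portMappings": true}}
    #   ]
    # }
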
	I0914 23:45:48.793719    8956 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 23:45:48.793808    8956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-438000 minikube.k8s.io/updated_at=2024_09_14T23_45_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=stopped-upgrade-438000 minikube.k8s.io/primary=true
	I0914 23:45:48.793849    8956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:45:48.796803    8956 ops.go:34] apiserver oom_adj: -16
	I0914 23:45:48.840233    8956 kubeadm.go:1113] duration metric: took 46.488ms to wait for elevateKubeSystemPrivileges
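
"elevateKubeSystemPrivileges" is the minikube-rbac clusterrolebinding created a few lines up, which grants cluster-admin to kube-system's default service account. It can be verified manually with:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o wide
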
	I0914 23:45:48.840257    8956 kubeadm.go:394] duration metric: took 4m11.538652625s to StartCluster
	I0914 23:45:48.840271    8956 settings.go:142] acquiring lock: {Name:mk03c42e45b73d6f59721a178a8a31fc79d22668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:45:48.840427    8956 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:45:48.840843    8956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/kubeconfig: {Name:mke334fd43bb51604954449e74caf7f81dee5b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:45:48.841061    8956 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:45:48.841095    8956 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 23:45:48.841137    8956 config.go:182] Loaded profile config "stopped-upgrade-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 23:45:48.841140    8956 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-438000"
	I0914 23:45:48.841148    8956 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-438000"
	W0914 23:45:48.841151    8956 addons.go:243] addon storage-provisioner should already be in state true
	I0914 23:45:48.841164    8956 host.go:66] Checking if "stopped-upgrade-438000" exists ...
	I0914 23:45:48.841171    8956 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-438000"
	I0914 23:45:48.841184    8956 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-438000"
	I0914 23:45:48.845082    8956 out.go:177] * Verifying Kubernetes components...
	I0914 23:45:48.845713    8956 kapi.go:59] client config for stopped-upgrade-438000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/stopped-upgrade-438000/client.key", CAFile:"/Users/jenkins/minikube-integration/19644-6577/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104949800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0914 23:45:48.848276    8956 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-438000"
	W0914 23:45:48.848280    8956 addons.go:243] addon default-storageclass should already be in state true
	I0914 23:45:48.848287    8956 host.go:66] Checking if "stopped-upgrade-438000" exists ...
	I0914 23:45:48.848819    8956 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 23:45:48.848824    8956 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 23:45:48.848829    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
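
The sshutil line above contains everything needed to open the same session by hand: the qemu2 driver forwards the guest's SSH port to localhost:51229 and logs in as docker with the profile's key. A manual equivalent:

    ssh -p 51229 \
      -i /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa \
      docker@localhost
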
	I0914 23:45:48.850956    8956 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:45:48.852277    8956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:45:48.855077    8956 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:45:48.855093    8956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 23:45:48.855107    8956 sshutil.go:53] new ssh client: &{IP:localhost Port:51229 SSHKeyPath:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/stopped-upgrade-438000/id_rsa Username:docker}
	I0914 23:45:48.924370    8956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 23:45:48.929653    8956 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:45:48.929702    8956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:45:48.934373    8956 api_server.go:72] duration metric: took 93.30125ms to wait for apiserver process to appear ...
	I0914 23:45:48.934382    8956 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:45:48.934391    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
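
From here the test polls the healthz endpoint with a 5-second client timeout. The manual equivalent of each probe is below; note that 10.0.2.15 is QEMU's default user-mode (slirp) guest address, which is generally not directly reachable from the macOS host, which would explain the repeated timeouts that follow:

    curl -sk --max-time 5 https://10.0.2.15:8443/healthz
    # a healthy apiserver answers: ok
    # here every attempt times out, as logged below
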
	I0914 23:45:48.939422    8956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 23:45:48.955804    8956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:45:49.297197    8956 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 23:45:49.297209    8956 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 23:45:53.936359    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:53.936377    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:45:58.936502    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:45:58.936535    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:03.936771    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:03.936811    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:08.937158    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:08.937180    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:13.937601    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:13.937643    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:18.938037    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:18.938099    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0914 23:46:19.298938    8956 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0914 23:46:19.303568    8956 out.go:177] * Enabled addons: storage-provisioner
	I0914 23:46:19.311415    8956 addons.go:510] duration metric: took 30.470910625s for enable addons: enabled=[storage-provisioner]
	I0914 23:46:23.939335    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:23.939373    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:28.940450    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:28.940486    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:33.941698    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:33.941739    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:38.942037    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:38.942079    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:43.943607    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:43.943629    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:48.945724    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:48.945841    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:46:48.957863    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:46:48.957947    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:46:48.967968    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:46:48.968052    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:46:48.978470    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:46:48.978556    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:46:48.989446    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:46:48.989528    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:46:48.999667    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:46:48.999751    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:46:49.010326    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:46:49.010408    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:46:49.020064    8956 logs.go:276] 0 containers: []
	W0914 23:46:49.020074    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:46:49.020135    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:46:49.030469    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:46:49.030487    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:46:49.030493    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:46:49.066850    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:46:49.066862    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:46:49.081616    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:46:49.081635    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:46:49.093901    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:46:49.093912    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:46:49.106202    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:46:49.106212    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:46:49.123899    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:46:49.123908    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:46:49.135437    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:46:49.135447    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:46:49.165675    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:46:49.165686    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:46:49.169540    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:46:49.169549    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:46:49.183090    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:46:49.183102    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:46:49.199343    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:46:49.199354    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:46:49.214715    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:46:49.214730    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:46:49.226184    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:46:49.226197    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
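
Each failed probe triggers the same diagnostics sweep seen above: list the k8s_* containers per component, tail 400 lines from each, then collect the kubelet and docker journals, dmesg and "describe nodes". A condensed sketch of that loop (a hedged reconstruction of the behaviour visible in the log, not minikube's actual code):

    while ! curl -sk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                  kube-controller-manager kindnet storage-provisioner; do
        for id in $(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'); do
          docker logs --tail 400 "$id"
        done
      done
      sudo journalctl -u kubelet -n 400
      sudo journalctl -u docker -u cri-docker -n 400
      sleep 2   # the timestamps above show roughly 2.5 s between sweeps
    done
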
	I0914 23:46:51.753646    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:46:56.756146    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:46:56.756342    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:46:56.771047    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:46:56.771141    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:46:56.782922    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:46:56.783010    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:46:56.794238    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:46:56.794316    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:46:56.805463    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:46:56.805545    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:46:56.815902    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:46:56.815972    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:46:56.826794    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:46:56.826872    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:46:56.838071    8956 logs.go:276] 0 containers: []
	W0914 23:46:56.838084    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:46:56.838154    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:46:56.849841    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:46:56.849856    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:46:56.849861    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:46:56.867788    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:46:56.867797    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:46:56.879922    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:46:56.879933    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:46:56.884401    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:46:56.884411    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:46:56.898816    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:46:56.898825    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:46:56.914789    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:46:56.914805    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:46:56.930275    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:46:56.930286    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:46:56.942698    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:46:56.942708    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:46:56.968243    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:46:56.968254    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:46:56.979683    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:46:56.979693    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:46:57.010045    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:46:57.010053    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:46:57.045029    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:46:57.045045    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:46:57.057731    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:46:57.057741    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:46:59.571854    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:04.574179    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:04.574416    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:04.594930    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:04.595069    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:04.614185    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:04.614276    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:04.625749    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:04.625831    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:04.636177    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:04.636261    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:04.646321    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:04.646405    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:04.656556    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:04.656633    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:04.666806    8956 logs.go:276] 0 containers: []
	W0914 23:47:04.666825    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:04.666899    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:04.677294    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:04.677308    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:04.677314    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:04.688375    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:04.688386    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:04.692614    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:04.692620    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:04.706305    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:04.706316    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:04.718768    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:04.718778    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:04.732843    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:04.732857    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:04.757501    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:04.757511    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:04.769141    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:04.769152    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:04.786719    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:04.786729    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:04.819324    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:04.819337    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:04.852778    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:04.852789    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:04.867068    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:04.867082    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:04.880792    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:04.880805    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:07.399590    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:12.404702    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:12.404848    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:12.419057    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:12.419141    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:12.430333    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:12.430429    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:12.441206    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:12.441290    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:12.452165    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:12.452252    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:12.463125    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:12.463212    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:12.474032    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:12.474117    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:12.484122    8956 logs.go:276] 0 containers: []
	W0914 23:47:12.484133    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:12.484204    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:12.494648    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:12.494663    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:12.494668    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:12.506270    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:12.506280    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:12.524119    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:12.524128    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:12.535718    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:12.535728    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:12.550740    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:12.550750    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:12.563026    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:12.563036    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:12.595123    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:12.595131    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:12.599291    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:12.599299    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:12.640494    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:12.640515    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:12.655367    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:12.655377    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:12.669910    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:12.669920    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:12.681330    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:12.681341    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:12.704919    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:12.704927    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:15.221432    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:20.228374    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:20.228474    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:20.239274    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:20.239370    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:20.250846    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:20.250930    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:20.261865    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:20.261952    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:20.272159    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:20.272242    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:20.283065    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:20.283144    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:20.293382    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:20.293453    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:20.303506    8956 logs.go:276] 0 containers: []
	W0914 23:47:20.303517    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:20.303578    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:20.314340    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:20.314354    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:20.314359    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:20.331690    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:20.331699    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:20.342671    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:20.342681    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:20.373251    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:20.373259    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:20.387171    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:20.387184    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:20.404836    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:20.404846    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:20.416595    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:20.416610    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:20.432424    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:20.432438    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:20.443998    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:20.444012    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:20.468766    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:20.468773    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:20.472869    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:20.472874    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:20.507634    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:20.507644    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:20.519904    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:20.519913    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:23.035156    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:28.040224    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:28.040379    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:28.055530    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:28.055628    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:28.066843    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:28.066930    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:28.077719    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:28.077802    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:28.088393    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:28.088475    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:28.099540    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:28.099630    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:28.110220    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:28.110300    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:28.119804    8956 logs.go:276] 0 containers: []
	W0914 23:47:28.119814    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:28.119877    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:28.132262    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:28.132279    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:28.132285    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:28.146297    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:28.146307    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:28.161091    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:28.161102    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:28.184889    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:28.184896    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:28.214592    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:28.214603    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:28.250194    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:28.250204    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:28.264217    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:28.264227    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:28.275632    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:28.275643    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:28.292959    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:28.292968    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:28.304359    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:28.304370    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:28.315871    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:28.315880    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:28.320726    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:28.320733    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:28.335477    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:28.335488    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:30.850837    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:35.854936    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:35.855102    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:35.870319    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:35.870418    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:35.881290    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:35.881379    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:35.891692    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:35.891775    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:35.904132    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:35.904208    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:35.914889    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:35.914972    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:35.925345    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:35.925423    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:35.935737    8956 logs.go:276] 0 containers: []
	W0914 23:47:35.935754    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:35.935832    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:35.947001    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:35.947015    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:35.947020    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:35.962224    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:35.962240    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:35.966886    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:35.966893    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:36.022718    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:36.022730    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:36.037381    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:36.037392    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:36.055626    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:36.055643    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:36.067579    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:36.067593    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:36.082684    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:36.082695    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:36.100363    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:36.100374    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:36.112372    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:36.112382    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:36.144130    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:36.144143    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:36.155971    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:36.155981    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:36.169922    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:36.169934    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:38.697490    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:43.700794    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:43.701005    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:43.718713    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:43.718837    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:43.735370    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:43.735465    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:43.748263    8956 logs.go:276] 2 containers: [f166433f26b6 1f072a646288]
	I0914 23:47:43.748348    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:43.759461    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:43.759539    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:43.777556    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:43.777643    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:43.788267    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:43.788350    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:43.798484    8956 logs.go:276] 0 containers: []
	W0914 23:47:43.798495    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:43.798568    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:43.813382    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:43.813399    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:43.813404    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:43.828775    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:43.828787    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:43.840385    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:43.840397    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:43.858864    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:43.858875    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:43.884461    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:43.884468    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:43.915771    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:43.915779    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:43.954537    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:43.954548    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:43.969790    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:43.969828    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:43.984254    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:43.984265    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:43.995822    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:43.995834    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:44.000699    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:44.000705    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:44.014151    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:44.014161    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:44.026053    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:44.026067    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:46.548669    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:51.551579    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:47:51.551768    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:51.564272    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:51.564364    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:51.574851    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:51.574935    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:51.585371    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
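
Note the coredns count has grown from 2 to 4 containers here, which usually means the pods were recreated or restarted while the apiserver stayed unreachable. Adding {{.Status}} to the same filter distinguishes running instances from exited ones (manual check):

    docker ps -a --filter name=k8s_coredns --format '{{.ID}} {{.Status}}'
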
	I0914 23:47:51.585460    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:51.595793    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:51.595875    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:51.606044    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:51.606126    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:51.617318    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:51.617405    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:51.633021    8956 logs.go:276] 0 containers: []
	W0914 23:47:51.633032    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:51.633106    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:51.643944    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:47:51.643961    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:51.643966    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:51.658529    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:51.658539    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:51.670294    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:51.670307    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:47:51.682056    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:51.682065    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:51.686419    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:51.686429    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:51.701746    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:47:51.701760    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:47:51.713218    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:47:51.713229    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:47:51.724899    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:51.724913    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:51.744034    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:51.744042    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:51.760592    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:51.760602    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:51.772361    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:51.772375    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:51.789790    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:51.789799    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:51.820475    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:51.820483    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:51.856433    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:51.856447    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:51.868075    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:51.868089    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:54.394928    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:47:59.397527    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
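	[editor's note] This pair of lines is the section's heartbeat: a GET against https://10.0.2.15:8443/healthz at api_server.go:253, then a client timeout exactly five seconds later at api_server.go:269, after which a log-gathering pass runs for ~2-3 seconds and the probe repeats. The cycle never once succeeds between 23:47 and 23:49, consistent with the apiserver being down. A minimal Go sketch of that probe pattern, assuming the 5-second budget read off the timestamps and an InsecureSkipVerify TLS config (the real client authenticates with the cluster CA and client certs from the kubeconfig):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hypothetical probe mirroring the api_server.go healthz check:
	// one GET with a hard client timeout, retried by the caller.
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between the :253 and :269 lines
		Transport: &http.Transport{
			// Assumption for the sketch only: skip certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		// The branch this log keeps hitting:
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
```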
	I0914 23:47:59.397730    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:47:59.419825    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:47:59.419922    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:47:59.431641    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:47:59.431724    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:47:59.442054    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:47:59.442144    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:47:59.452570    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:47:59.452653    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:47:59.463602    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:47:59.463681    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:47:59.473957    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:47:59.474026    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:47:59.485058    8956 logs.go:276] 0 containers: []
	W0914 23:47:59.485073    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:47:59.485149    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:47:59.495109    8956 logs.go:276] 1 containers: [7b4151abb66d]
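	[editor's note] After each failed probe, the runner enumerates exactly one container per control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, logging the count and IDs at logs.go:276 and a warning at logs.go:278 when a component (here, kindnet) has none. A sketch of that fan-out, with the assumption that a local docker CLI stands in for the one the log invokes over SSH inside the VM (ssh_runner.go:195):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The component name filters queried in the log, in order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// Assumption: docker runs locally here; the log runs it remotely.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c,
			"--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		ids := strings.Fields(string(out))
		// Mirrors "N containers: [...]" and the "No container was found
		// matching ..." warning when ids is empty.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```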
	I0914 23:47:59.495129    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:47:59.495134    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:47:59.506232    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:47:59.506245    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:47:59.523635    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:47:59.523645    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:47:59.558572    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:47:59.558583    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:47:59.573040    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:47:59.573054    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:47:59.585032    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:47:59.585043    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:47:59.600388    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:47:59.600398    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:47:59.625433    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:47:59.625441    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:47:59.630225    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:47:59.630233    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:47:59.644506    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:47:59.644516    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:47:59.657440    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:47:59.657452    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:47:59.669448    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:47:59.669459    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:47:59.700607    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:47:59.700615    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:47:59.711971    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:47:59.711982    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:47:59.724237    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:47:59.724247    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
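	[editor's note] Each cycle then tails 400 lines from every source: `docker logs --tail 400 <id>` per container, journalctl for kubelet and for docker/cri-docker, dmesg filtered to warnings and above, `kubectl describe nodes` via the versioned binary under /var/lib/minikube/binaries/v1.24.1, and a container-status command that prefers crictl but falls back to `docker ps -a` (the `which crictl || echo crictl` idiom keeps the pipeline from expanding to an empty command when crictl is absent). A sketch driving the same shell fragments, assuming bash and local execution rather than the log's SSH-wrapped /bin/bash -c, with a container ID copied from the log as a placeholder:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Shell fragments copied from the gathering pass (logs.go:123).
	cmds := []string{
		`docker logs --tail 400 c5f1a09efc92`, // kube-apiserver ID from this log
		`sudo journalctl -u kubelet -n 400`,
		`sudo journalctl -u docker -u cri-docker -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		// Prefer crictl when installed, otherwise fall back to docker:
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		// Assumption: run locally; the runner wraps each in /bin/bash -c over SSH.
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\nerr=%v\n%s\n", c, err, out)
	}
}
```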
	I0914 23:48:02.236229    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:07.238781    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:07.238945    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:07.255381    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:07.255482    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:07.267871    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:07.267961    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:07.279006    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:07.279081    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:07.289453    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:07.289535    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:07.301450    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:07.301531    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:07.312089    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:07.312164    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:07.322675    8956 logs.go:276] 0 containers: []
	W0914 23:48:07.322687    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:07.322750    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:07.334888    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:07.334916    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:07.334923    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:07.350670    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:07.350685    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:07.362702    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:07.362714    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:07.384414    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:07.384424    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:07.419354    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:07.419363    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:07.431043    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:07.431054    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:07.443133    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:07.443145    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:07.468290    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:07.468299    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:07.498276    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:07.498283    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:07.509873    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:07.509886    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:07.521349    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:07.521359    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:07.525955    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:07.525961    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:07.541641    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:07.541654    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:07.553549    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:07.553560    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:07.571475    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:07.571485    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:10.087782    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:15.090198    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:15.090322    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:15.102991    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:15.103081    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:15.113760    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:15.113841    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:15.124636    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:15.124723    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:15.135198    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:15.135280    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:15.146023    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:15.146112    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:15.156548    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:15.156634    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:15.166797    8956 logs.go:276] 0 containers: []
	W0914 23:48:15.166807    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:15.166872    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:15.177629    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:15.177651    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:15.177657    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:15.189448    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:15.189458    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:15.201439    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:15.201450    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:15.234079    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:15.234091    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:15.255369    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:15.255384    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:15.267098    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:15.267111    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:15.292566    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:15.292574    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:15.303968    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:15.303979    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:15.318252    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:15.318263    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:15.356903    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:15.356916    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:15.369095    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:15.369105    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:15.373704    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:15.373710    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:15.389938    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:15.389949    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:15.401560    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:15.401571    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:15.427073    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:15.427091    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:17.943073    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:22.943652    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:22.943877    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:22.961752    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:22.961846    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:22.972000    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:22.972081    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:22.982674    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:22.982761    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:22.993641    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:22.993720    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:23.016370    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:23.016452    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:23.031412    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:23.031495    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:23.041677    8956 logs.go:276] 0 containers: []
	W0914 23:48:23.041689    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:23.041757    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:23.051893    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:23.051909    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:23.051914    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:23.067955    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:23.067966    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:23.093547    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:23.093555    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:23.124680    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:23.124690    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:23.159823    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:23.159832    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:23.164135    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:23.164142    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:23.175565    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:23.175576    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:23.187656    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:23.187665    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:23.202838    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:23.202848    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:23.214440    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:23.214451    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:23.229321    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:23.229329    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:23.243856    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:23.243870    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:23.256302    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:23.256310    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:23.274186    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:23.274195    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:23.286016    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:23.286026    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:25.800703    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:30.803035    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:30.803219    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:30.819687    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:30.819799    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:30.833041    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:30.833133    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:30.844503    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:30.844580    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:30.854688    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:30.854776    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:30.865772    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:30.865855    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:30.876593    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:30.876679    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:30.887264    8956 logs.go:276] 0 containers: []
	W0914 23:48:30.887273    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:30.887336    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:30.897767    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:30.897785    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:30.897791    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:30.930729    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:30.930743    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:30.935523    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:30.935530    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:30.973015    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:30.973026    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:30.985123    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:30.985136    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:30.997382    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:30.997392    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:31.021697    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:31.021704    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:31.036031    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:31.036042    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:31.055314    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:31.055323    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:31.067237    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:31.067249    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:31.082036    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:31.082053    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:31.093745    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:31.093760    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:31.105192    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:31.105206    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:31.121299    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:31.121308    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:31.138202    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:31.138212    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:33.652150    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:38.654430    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:38.654658    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:38.673469    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:38.673582    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:38.688135    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:38.688221    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:38.700475    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:38.700557    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:38.710923    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:38.710991    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:38.721711    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:38.721798    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:38.732302    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:38.732377    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:38.742580    8956 logs.go:276] 0 containers: []
	W0914 23:48:38.742591    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:38.742660    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:38.753316    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:38.753333    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:38.753340    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:38.764717    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:38.764727    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:38.781767    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:38.781781    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:38.806005    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:38.806014    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:38.817739    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:38.817754    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:38.834351    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:38.834366    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:38.841173    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:38.841183    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:38.855495    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:38.855506    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:38.867736    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:38.867746    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:38.887471    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:38.887484    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:38.900350    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:38.900362    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:38.932018    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:38.932033    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:38.969082    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:38.969092    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:38.980979    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:38.980989    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:38.995159    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:38.995172    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:41.510744    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:46.512997    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:46.513160    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:46.524713    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:46.524801    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:46.535680    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:46.535770    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:46.546427    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:46.546511    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:46.557255    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:46.557338    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:46.568045    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:46.568125    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:46.579115    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:46.579190    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:46.589385    8956 logs.go:276] 0 containers: []
	W0914 23:48:46.589399    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:46.589462    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:46.600951    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:46.600968    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:46.600972    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:46.612533    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:46.612544    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:46.623883    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:46.623893    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:46.639579    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:46.639593    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:46.653767    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:46.653780    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:46.665272    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:46.665282    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:46.699459    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:46.699470    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:46.703705    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:46.703711    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:46.715197    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:46.715207    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:46.727325    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:46.727335    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:46.745181    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:46.745190    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:46.770703    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:46.770710    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:46.801948    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:46.801955    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:46.814205    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:46.814218    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:46.826408    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:46.826420    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:49.348395    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:48:54.350572    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:48:54.350809    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:48:54.371570    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:48:54.371690    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:48:54.386819    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:48:54.386918    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:48:54.399271    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:48:54.399352    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:48:54.410140    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:48:54.410219    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:48:54.420800    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:48:54.420875    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:48:54.431124    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:48:54.431205    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:48:54.441729    8956 logs.go:276] 0 containers: []
	W0914 23:48:54.441741    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:48:54.441805    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:48:54.453098    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:48:54.453115    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:48:54.453123    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:48:54.471932    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:48:54.471943    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:48:54.483329    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:48:54.483339    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:48:54.498296    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:48:54.498308    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:48:54.530458    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:48:54.530467    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:48:54.534962    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:48:54.534970    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:48:54.550807    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:48:54.550816    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:48:54.563052    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:48:54.563064    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:48:54.575436    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:48:54.575446    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:48:54.600605    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:48:54.600613    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:48:54.611906    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:48:54.611915    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:48:54.648189    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:48:54.648200    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:48:54.660134    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:48:54.660145    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:48:54.672779    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:48:54.672790    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:48:54.690984    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:48:54.690995    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:48:57.205279    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:02.207409    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:02.207541    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:02.225181    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:02.225276    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:02.236622    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:02.236707    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:02.247337    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:49:02.247429    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:02.258807    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:02.258887    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:02.276157    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:02.276236    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:02.286234    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:02.286310    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:02.296943    8956 logs.go:276] 0 containers: []
	W0914 23:49:02.296957    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:02.297033    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:02.307756    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:02.307773    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:02.307779    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:02.338026    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:02.338036    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:02.362061    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:02.362071    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:02.376954    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:02.376967    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:02.391084    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:02.391099    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:02.405772    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:49:02.405782    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:49:02.417196    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:02.417211    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:02.429052    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:02.429063    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:02.433509    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:02.433517    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:02.468363    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:02.468374    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:02.479952    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:02.479962    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:02.495347    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:02.495356    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:02.513049    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:02.513058    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:02.524683    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:49:02.524693    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:49:02.540935    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:02.540950    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:05.054437    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:10.056745    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:10.056861    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:10.068075    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:10.068163    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:10.080702    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:10.080787    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:10.092084    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:49:10.092166    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:10.102841    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:10.102923    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:10.113658    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:10.113742    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:10.124170    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:10.124244    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:10.134400    8956 logs.go:276] 0 containers: []
	W0914 23:49:10.134412    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:10.134480    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:10.148402    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:10.148418    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:10.148423    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:10.160113    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:10.160126    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:10.176609    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:10.176619    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:10.193577    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:10.193586    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:10.207811    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:10.207825    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:10.221857    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:10.221866    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:10.226761    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:10.226768    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:10.238904    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:10.238915    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:10.250951    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:10.250961    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:10.276443    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:10.276465    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:10.289427    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:49:10.289437    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:49:10.301382    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:49:10.301393    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:49:10.313484    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:10.313497    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:10.325784    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:10.325794    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:10.356671    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:10.356678    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:12.892081    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:17.894399    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:17.894636    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:17.911777    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:17.911884    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:17.925293    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:17.925384    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:17.936562    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:49:17.936641    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:17.947547    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:17.947630    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:17.960031    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:17.960113    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:17.970774    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:17.970860    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:17.981192    8956 logs.go:276] 0 containers: []
	W0914 23:49:17.981202    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:17.981270    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:17.992747    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:17.992763    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:17.992770    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:18.023707    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:18.023714    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:18.038289    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:18.038303    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:18.052599    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:49:18.052609    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:49:18.065851    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:18.065863    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:18.102071    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:18.102087    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:18.114445    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:18.114455    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:18.126768    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:18.126779    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:18.138818    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:49:18.138830    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:49:18.154919    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:18.154927    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:18.172863    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:18.172873    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:18.177076    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:18.177084    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:18.192093    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:18.192103    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:18.204249    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:18.204258    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:18.228689    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:18.228699    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:20.743034    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:25.745317    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:25.745562    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:25.763985    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:25.764093    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:25.779429    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:25.779508    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:25.803701    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:49:25.803795    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:25.816458    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:25.816547    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:25.832401    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:25.832481    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:25.842832    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:25.842918    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:25.854033    8956 logs.go:276] 0 containers: []
	W0914 23:49:25.854046    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:25.854119    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:25.864643    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:25.864659    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:25.864665    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:25.876599    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:25.876610    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:25.911692    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:25.911702    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:25.925932    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:25.925948    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:25.937430    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:49:25.937444    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:49:25.949533    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:25.949545    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:25.964004    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:25.964018    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:25.978427    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:25.978439    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:25.997146    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:25.997157    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:26.020556    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:26.020563    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:26.051674    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:49:26.051687    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:49:26.063922    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:26.063933    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:26.079843    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:26.079853    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:26.084095    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:26.084103    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:26.096152    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:26.096163    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:28.615137    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:33.616771    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:33.616932    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:33.640772    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:33.640866    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:33.655503    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:33.655592    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:33.666244    8956 logs.go:276] 4 containers: [a1372db1fd0a e8c5d4d78795 f166433f26b6 1f072a646288]
	I0914 23:49:33.666332    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:33.676819    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:33.676900    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:33.687982    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:33.688062    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:33.702282    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:33.702360    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:33.712933    8956 logs.go:276] 0 containers: []
	W0914 23:49:33.712947    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:33.713016    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:33.723692    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:33.723710    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:33.723715    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:33.728338    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:33.728345    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:33.763391    8956 logs.go:123] Gathering logs for coredns [f166433f26b6] ...
	I0914 23:49:33.763402    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f166433f26b6"
	I0914 23:49:33.775602    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:33.775615    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:33.788053    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:33.788065    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:33.799706    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:33.799715    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:33.811155    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:33.811169    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:33.841692    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:33.841700    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:33.855450    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:33.855461    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:33.867249    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:33.867260    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:33.878588    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:33.878600    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:33.893060    8956 logs.go:123] Gathering logs for coredns [1f072a646288] ...
	I0914 23:49:33.893075    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f072a646288"
	I0914 23:49:33.905163    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:33.905172    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:33.928759    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:33.928770    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:33.950027    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:33.950037    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:36.478600    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:41.480745    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:41.480857    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 23:49:41.492144    8956 logs.go:276] 1 containers: [c5f1a09efc92]
	I0914 23:49:41.492223    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 23:49:41.502824    8956 logs.go:276] 1 containers: [4fd0c23b9b01]
	I0914 23:49:41.502907    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 23:49:41.513655    8956 logs.go:276] 4 containers: [5e1f1acf344a 28fb6b188a11 a1372db1fd0a e8c5d4d78795]
	I0914 23:49:41.513758    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 23:49:41.524028    8956 logs.go:276] 1 containers: [bfda0a5cb9c6]
	I0914 23:49:41.524099    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 23:49:41.534551    8956 logs.go:276] 1 containers: [d4182f831fd2]
	I0914 23:49:41.534625    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 23:49:41.545663    8956 logs.go:276] 1 containers: [1438530d8647]
	I0914 23:49:41.545738    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 23:49:41.555541    8956 logs.go:276] 0 containers: []
	W0914 23:49:41.555553    8956 logs.go:278] No container was found matching "kindnet"
	I0914 23:49:41.555627    8956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 23:49:41.565556    8956 logs.go:276] 1 containers: [7b4151abb66d]
	I0914 23:49:41.565575    8956 logs.go:123] Gathering logs for dmesg ...
	I0914 23:49:41.565581    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:49:41.570634    8956 logs.go:123] Gathering logs for kube-apiserver [c5f1a09efc92] ...
	I0914 23:49:41.570642    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f1a09efc92"
	I0914 23:49:41.585055    8956 logs.go:123] Gathering logs for coredns [5e1f1acf344a] ...
	I0914 23:49:41.585065    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1f1acf344a"
	I0914 23:49:41.596356    8956 logs.go:123] Gathering logs for kube-proxy [d4182f831fd2] ...
	I0914 23:49:41.596370    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4182f831fd2"
	I0914 23:49:41.607958    8956 logs.go:123] Gathering logs for kube-controller-manager [1438530d8647] ...
	I0914 23:49:41.607969    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1438530d8647"
	I0914 23:49:41.625559    8956 logs.go:123] Gathering logs for kubelet ...
	I0914 23:49:41.625569    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:49:41.657447    8956 logs.go:123] Gathering logs for coredns [28fb6b188a11] ...
	I0914 23:49:41.657456    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fb6b188a11"
	I0914 23:49:41.668837    8956 logs.go:123] Gathering logs for coredns [e8c5d4d78795] ...
	I0914 23:49:41.668852    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8c5d4d78795"
	I0914 23:49:41.680809    8956 logs.go:123] Gathering logs for Docker ...
	I0914 23:49:41.680819    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 23:49:41.706031    8956 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:49:41.706047    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 23:49:41.740901    8956 logs.go:123] Gathering logs for etcd [4fd0c23b9b01] ...
	I0914 23:49:41.740912    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fd0c23b9b01"
	I0914 23:49:41.758515    8956 logs.go:123] Gathering logs for coredns [a1372db1fd0a] ...
	I0914 23:49:41.758526    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1372db1fd0a"
	I0914 23:49:41.770560    8956 logs.go:123] Gathering logs for kube-scheduler [bfda0a5cb9c6] ...
	I0914 23:49:41.770571    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfda0a5cb9c6"
	I0914 23:49:41.786617    8956 logs.go:123] Gathering logs for storage-provisioner [7b4151abb66d] ...
	I0914 23:49:41.786627    8956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4151abb66d"
	I0914 23:49:41.798467    8956 logs.go:123] Gathering logs for container status ...
	I0914 23:49:41.798476    8956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:49:44.312343    8956 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 23:49:49.314621    8956 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:49:49.318833    8956 out.go:201] 
	W0914 23:49:49.322926    8956 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0914 23:49:49.322931    8956 out.go:270] * 
	W0914 23:49:49.323348    8956 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:49:49.334922    8956 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-438000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (585.99s)
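
Editor's note: the loop above is minikube re-polling https://10.0.2.15:8443/healthz every few seconds, re-enumerating containers and re-gathering logs between attempts, until the 6m0s node-wait budget runs out. The following is a minimal sketch of what that probe amounts to; the endpoint, the 6-minute budget, and the per-request timeout are taken from the log, InsecureSkipVerify is assumed because the apiserver serves a self-signed certificate in the guest, and none of this is minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout; produces "Client.Timeout exceeded" when the guest is unreachable
		Transport: &http.Transport{
			// assumption: the apiserver's serving cert is self-signed inside the guest
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors "wait 6m0s for node"
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
}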

                                                
                                    
TestPause/serial/Start (9.88s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-155000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-155000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.831249833s)

                                                
                                                
-- stdout --
	* [pause-155000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-155000" primary control-plane node in "pause-155000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-155000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-155000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-155000 -n pause-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-155000 -n pause-155000: exit status 7 (52.386208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.88s)
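
Editor's note: this failure, and every qemu2 start below it, dies the same way: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to the /var/run/socket_vmnet unix socket is refused, i.e. no socket_vmnet daemon is listening on the host. A minimal way to confirm that from Go (illustrative only; the socket path is taken from the output above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client connects to. A
	// "connection refused" or "no such file or directory" error here
	// reproduces the failure in the test output and means the
	// socket_vmnet daemon is not running on the host.
	const sock = "/var/run/socket_vmnet" // path from the logs above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}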

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-019000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-019000 --driver=qemu2 : exit status 80 (9.810201125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-019000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-019000" primary control-plane node in "NoKubernetes-019000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-019000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-019000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-019000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-019000 -n NoKubernetes-019000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-019000 -n NoKubernetes-019000: exit status 7 (55.841292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-019000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.87s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-019000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-019000 --no-kubernetes --driver=qemu2 : exit status 80 (7.433338667s)

                                                
                                                
-- stdout --
	* [NoKubernetes-019000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-019000
	* Restarting existing qemu2 VM for "NoKubernetes-019000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-019000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-019000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-019000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-019000 -n NoKubernetes-019000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-019000 -n NoKubernetes-019000: exit status 7 (40.425708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-019000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.47s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.38s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19644
- KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current750357494/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.38s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.43s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19644
- KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current168966987/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.43s)
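
Editor's note: both hyperkit skip-upgrade failures are environmental rather than regressions: the job runs on darwin/arm64 and hyperkit only exists for darwin/amd64, so minikube bails out with DRV_UNSUPPORTED_OS before touching any VM. A sketch of that kind of platform gate follows; the function name and error wording are hypothetical, not minikube's code.

package main

import (
	"fmt"
	"runtime"
)

// hyperkitSupported is a hypothetical stand-in for the platform check that
// produces DRV_UNSUPPORTED_OS above: hyperkit is only built for Intel Macs.
func hyperkitSupported() error {
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		return fmt.Errorf("the driver 'hyperkit' is not supported on %s/%s", runtime.GOOS, runtime.GOARCH)
	}
	return nil
}

func main() {
	if err := hyperkitSupported(); err != nil {
		fmt.Println("X Exiting due to DRV_UNSUPPORTED_OS:", err)
	}
}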

                                                
                                    
TestNoKubernetes/serial/Start (5.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-019000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-019000 --no-kubernetes --driver=qemu2 : exit status 80 (5.230970667s)

                                                
                                                
-- stdout --
	* [NoKubernetes-019000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-019000
	* Restarting existing qemu2 VM for "NoKubernetes-019000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-019000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-019000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-019000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-019000 -n NoKubernetes-019000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-019000 -n NoKubernetes-019000: exit status 7 (30.754417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-019000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-019000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-019000 --driver=qemu2 : exit status 80 (7.146230791s)

                                                
                                                
-- stdout --
	* [NoKubernetes-019000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-019000
	* Restarting existing qemu2 VM for "NoKubernetes-019000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-019000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-019000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-019000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-019000 -n NoKubernetes-019000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-019000 -n NoKubernetes-019000: exit status 7 (53.032708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-019000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (7.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.909567375s)

                                                
                                                
-- stdout --
	* [auto-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-262000" primary control-plane node in "auto-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:51:30.811754    9379 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:51:30.811888    9379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:51:30.811894    9379 out.go:358] Setting ErrFile to fd 2...
	I0914 23:51:30.811897    9379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:51:30.812035    9379 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:51:30.813151    9379 out.go:352] Setting JSON to false
	I0914 23:51:30.829101    9379 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6659,"bootTime":1726376431,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:51:30.829179    9379 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:51:30.835776    9379 out.go:177] * [auto-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:51:30.843703    9379 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:51:30.843774    9379 notify.go:220] Checking for updates...
	I0914 23:51:30.848658    9379 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:51:30.851656    9379 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:51:30.854650    9379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:51:30.857674    9379 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:51:30.860720    9379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:51:30.862500    9379 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:51:30.862566    9379 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:51:30.862618    9379 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:51:30.865606    9379 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:51:30.872482    9379 start.go:297] selected driver: qemu2
	I0914 23:51:30.872489    9379 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:51:30.872501    9379 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:51:30.874721    9379 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:51:30.877606    9379 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:51:30.881702    9379 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:51:30.881728    9379 cni.go:84] Creating CNI manager for ""
	I0914 23:51:30.881753    9379 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:51:30.881758    9379 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:51:30.881793    9379 start.go:340] cluster config:
	{Name:auto-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:51:30.885477    9379 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:51:30.893629    9379 out.go:177] * Starting "auto-262000" primary control-plane node in "auto-262000" cluster
	I0914 23:51:30.897690    9379 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:51:30.897706    9379 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:51:30.897718    9379 cache.go:56] Caching tarball of preloaded images
	I0914 23:51:30.897777    9379 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:51:30.897783    9379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:51:30.897849    9379 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/auto-262000/config.json ...
	I0914 23:51:30.897860    9379 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/auto-262000/config.json: {Name:mkab5407ec404e7d4e4fde780bcdd462fabc7a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:51:30.898075    9379 start.go:360] acquireMachinesLock for auto-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:51:30.898108    9379 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "auto-262000"
	I0914 23:51:30.898118    9379 start.go:93] Provisioning new machine with config: &{Name:auto-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:51:30.898151    9379 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:51:30.905640    9379 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:51:30.923570    9379 start.go:159] libmachine.API.Create for "auto-262000" (driver="qemu2")
	I0914 23:51:30.923603    9379 client.go:168] LocalClient.Create starting
	I0914 23:51:30.923671    9379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:51:30.923701    9379 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:30.923713    9379 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:30.923750    9379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:51:30.923774    9379 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:30.923787    9379 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:30.924151    9379 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:51:31.087872    9379 main.go:141] libmachine: Creating SSH key...
	I0914 23:51:31.254660    9379 main.go:141] libmachine: Creating Disk image...
	I0914 23:51:31.254669    9379 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:51:31.254912    9379 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2
	I0914 23:51:31.264690    9379 main.go:141] libmachine: STDOUT: 
	I0914 23:51:31.264703    9379 main.go:141] libmachine: STDERR: 
	I0914 23:51:31.264764    9379 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2 +20000M
	I0914 23:51:31.272929    9379 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:51:31.272952    9379 main.go:141] libmachine: STDERR: 
	I0914 23:51:31.272969    9379 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2
	I0914 23:51:31.272973    9379 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:51:31.272983    9379 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:51:31.273012    9379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:b1:46:a8:39:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2
	I0914 23:51:31.274682    9379 main.go:141] libmachine: STDOUT: 
	I0914 23:51:31.274695    9379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:51:31.274716    9379 client.go:171] duration metric: took 351.111167ms to LocalClient.Create
	I0914 23:51:33.276865    9379 start.go:128] duration metric: took 2.378719834s to createHost
	I0914 23:51:33.276936    9379 start.go:83] releasing machines lock for "auto-262000", held for 2.378846417s
	W0914 23:51:33.276989    9379 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:33.292141    9379 out.go:177] * Deleting "auto-262000" in qemu2 ...
	W0914 23:51:33.322893    9379 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:33.322926    9379 start.go:729] Will try again in 5 seconds ...
	I0914 23:51:38.325011    9379 start.go:360] acquireMachinesLock for auto-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:51:38.325515    9379 start.go:364] duration metric: took 408.667µs to acquireMachinesLock for "auto-262000"
	I0914 23:51:38.325624    9379 start.go:93] Provisioning new machine with config: &{Name:auto-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:51:38.325975    9379 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:51:38.337666    9379 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:51:38.387571    9379 start.go:159] libmachine.API.Create for "auto-262000" (driver="qemu2")
	I0914 23:51:38.387624    9379 client.go:168] LocalClient.Create starting
	I0914 23:51:38.387730    9379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:51:38.387796    9379 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:38.387811    9379 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:38.387880    9379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:51:38.387923    9379 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:38.387935    9379 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:38.388482    9379 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:51:38.560169    9379 main.go:141] libmachine: Creating SSH key...
	I0914 23:51:38.623415    9379 main.go:141] libmachine: Creating Disk image...
	I0914 23:51:38.623424    9379 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:51:38.623668    9379 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2
	I0914 23:51:38.632723    9379 main.go:141] libmachine: STDOUT: 
	I0914 23:51:38.632748    9379 main.go:141] libmachine: STDERR: 
	I0914 23:51:38.632820    9379 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2 +20000M
	I0914 23:51:38.640615    9379 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:51:38.640632    9379 main.go:141] libmachine: STDERR: 
	I0914 23:51:38.640642    9379 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2
	I0914 23:51:38.640648    9379 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:51:38.640657    9379 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:51:38.640690    9379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:31:67:4a:fa:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/auto-262000/disk.qcow2
	I0914 23:51:38.642330    9379 main.go:141] libmachine: STDOUT: 
	I0914 23:51:38.642350    9379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:51:38.642363    9379 client.go:171] duration metric: took 254.73775ms to LocalClient.Create
	I0914 23:51:40.644512    9379 start.go:128] duration metric: took 2.318535833s to createHost
	I0914 23:51:40.644585    9379 start.go:83] releasing machines lock for "auto-262000", held for 2.319074917s
	W0914 23:51:40.644949    9379 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:40.658769    9379 out.go:201] 
	W0914 23:51:40.665756    9379 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:51:40.665782    9379 out.go:270] * 
	* 
	W0914 23:51:40.668857    9379 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:51:40.678737    9379 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.91s)
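Every attempt in this group dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never handed a network file descriptor and libmachine aborts the create. The check is easy to reproduce outside the suite; below is a minimal Go sketch (a hypothetical diagnostic, not minikube code) that dials the same socket and reports the identical "connection refused" when the socket_vmnet daemon is not running:

package main

// socketcheck: minimal diagnostic sketch (hypothetical, not part of the
// test suite). It dials the unix socket that socket_vmnet_client uses;
// "connection refused" here reproduces the STDERR seen in the runs above.

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing command line
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails, starting the daemon on the build agent (for Homebrew installs, typically sudo brew services start socket_vmnet) is the usual remedy before rerunning the group.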

TestNetworkPlugins/group/kindnet/Start (9.9s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.894249416s)

-- stdout --
	* [kindnet-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-262000" primary control-plane node in "kindnet-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:51:42.935319    9489 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:51:42.935455    9489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:51:42.935458    9489 out.go:358] Setting ErrFile to fd 2...
	I0914 23:51:42.935460    9489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:51:42.935579    9489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:51:42.936626    9489 out.go:352] Setting JSON to false
	I0914 23:51:42.952819    9489 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6671,"bootTime":1726376431,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:51:42.952895    9489 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:51:42.956484    9489 out.go:177] * [kindnet-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:51:42.962696    9489 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:51:42.962745    9489 notify.go:220] Checking for updates...
	I0914 23:51:42.969664    9489 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:51:42.972670    9489 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:51:42.975581    9489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:51:42.978689    9489 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:51:42.981692    9489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:51:42.983533    9489 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:51:42.983606    9489 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:51:42.983655    9489 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:51:42.987643    9489 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:51:42.994542    9489 start.go:297] selected driver: qemu2
	I0914 23:51:42.994550    9489 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:51:42.994558    9489 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:51:42.996780    9489 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:51:43.000654    9489 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:51:43.003743    9489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:51:43.003759    9489 cni.go:84] Creating CNI manager for "kindnet"
	I0914 23:51:43.003763    9489 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 23:51:43.003795    9489 start.go:340] cluster config:
	{Name:kindnet-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:51:43.007382    9489 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:51:43.015696    9489 out.go:177] * Starting "kindnet-262000" primary control-plane node in "kindnet-262000" cluster
	I0914 23:51:43.019624    9489 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:51:43.019640    9489 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:51:43.019652    9489 cache.go:56] Caching tarball of preloaded images
	I0914 23:51:43.019724    9489 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:51:43.019729    9489 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:51:43.019783    9489 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/kindnet-262000/config.json ...
	I0914 23:51:43.019798    9489 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/kindnet-262000/config.json: {Name:mk8e9d078f839de9c2050a410ed54cc02f19a28c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:51:43.020009    9489 start.go:360] acquireMachinesLock for kindnet-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:51:43.020042    9489 start.go:364] duration metric: took 27.791µs to acquireMachinesLock for "kindnet-262000"
	I0914 23:51:43.020053    9489 start.go:93] Provisioning new machine with config: &{Name:kindnet-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:51:43.020086    9489 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:51:43.027640    9489 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:51:43.045663    9489 start.go:159] libmachine.API.Create for "kindnet-262000" (driver="qemu2")
	I0914 23:51:43.045696    9489 client.go:168] LocalClient.Create starting
	I0914 23:51:43.045765    9489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:51:43.045794    9489 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:43.045809    9489 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:43.045846    9489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:51:43.045870    9489 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:43.045880    9489 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:43.046272    9489 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:51:43.208539    9489 main.go:141] libmachine: Creating SSH key...
	I0914 23:51:43.266335    9489 main.go:141] libmachine: Creating Disk image...
	I0914 23:51:43.266341    9489 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:51:43.266574    9489 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2
	I0914 23:51:43.275727    9489 main.go:141] libmachine: STDOUT: 
	I0914 23:51:43.275746    9489 main.go:141] libmachine: STDERR: 
	I0914 23:51:43.275803    9489 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2 +20000M
	I0914 23:51:43.283619    9489 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:51:43.283635    9489 main.go:141] libmachine: STDERR: 
	I0914 23:51:43.283656    9489 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2
	I0914 23:51:43.283660    9489 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:51:43.283672    9489 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:51:43.283708    9489 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b9:d3:10:54:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2
	I0914 23:51:43.285316    9489 main.go:141] libmachine: STDOUT: 
	I0914 23:51:43.285330    9489 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:51:43.285350    9489 client.go:171] duration metric: took 239.651167ms to LocalClient.Create
	I0914 23:51:45.287499    9489 start.go:128] duration metric: took 2.267414542s to createHost
	I0914 23:51:45.287570    9489 start.go:83] releasing machines lock for "kindnet-262000", held for 2.267543333s
	W0914 23:51:45.287620    9489 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:45.297839    9489 out.go:177] * Deleting "kindnet-262000" in qemu2 ...
	W0914 23:51:45.330802    9489 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:45.330824    9489 start.go:729] Will try again in 5 seconds ...
	I0914 23:51:50.332992    9489 start.go:360] acquireMachinesLock for kindnet-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:51:50.333476    9489 start.go:364] duration metric: took 378.708µs to acquireMachinesLock for "kindnet-262000"
	I0914 23:51:50.333587    9489 start.go:93] Provisioning new machine with config: &{Name:kindnet-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:51:50.333831    9489 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:51:50.351610    9489 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:51:50.402309    9489 start.go:159] libmachine.API.Create for "kindnet-262000" (driver="qemu2")
	I0914 23:51:50.402381    9489 client.go:168] LocalClient.Create starting
	I0914 23:51:50.402524    9489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:51:50.402591    9489 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:50.402611    9489 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:50.402691    9489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:51:50.402737    9489 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:50.402751    9489 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:50.403277    9489 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:51:50.574971    9489 main.go:141] libmachine: Creating SSH key...
	I0914 23:51:50.731866    9489 main.go:141] libmachine: Creating Disk image...
	I0914 23:51:50.731872    9489 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:51:50.732133    9489 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2
	I0914 23:51:50.741646    9489 main.go:141] libmachine: STDOUT: 
	I0914 23:51:50.741666    9489 main.go:141] libmachine: STDERR: 
	I0914 23:51:50.741719    9489 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2 +20000M
	I0914 23:51:50.749639    9489 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:51:50.749655    9489 main.go:141] libmachine: STDERR: 
	I0914 23:51:50.749667    9489 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2
	I0914 23:51:50.749671    9489 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:51:50.749681    9489 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:51:50.749708    9489 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:4f:fe:54:f6:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kindnet-262000/disk.qcow2
	I0914 23:51:50.751375    9489 main.go:141] libmachine: STDOUT: 
	I0914 23:51:50.751387    9489 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:51:50.751401    9489 client.go:171] duration metric: took 349.003292ms to LocalClient.Create
	I0914 23:51:52.753548    9489 start.go:128] duration metric: took 2.41971625s to createHost
	I0914 23:51:52.753651    9489 start.go:83] releasing machines lock for "kindnet-262000", held for 2.420147375s
	W0914 23:51:52.753980    9489 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:52.768676    9489 out.go:201] 
	W0914 23:51:52.772789    9489 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:51:52.772844    9489 out.go:270] * 
	* 
	W0914 23:51:52.775909    9489 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:51:52.786605    9489 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.90s)
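The transcript above shows the driver's two-attempt flow: createHost fails, the half-built profile is deleted, and after a fixed five-second pause the whole create is retried once before minikube exits with GUEST_PROVISION (exit status 80). A simplified Go sketch of that control flow, with illustrative names rather than minikube's actual API:

package main

// Retry sketch mirroring the pattern in the log: one retry after a fixed
// 5s delay, then a hard exit. startHost is a stand-in for the real
// createHost path and fails the way the log does while the socket_vmnet
// daemon is down.

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // matches the observed exit status
		}
	}
}

Because the daemon never comes back between attempts, the retry is guaranteed to fail the same way, which is why every test in this group lands at roughly the same ten-second duration.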

TestNetworkPlugins/group/calico/Start (9.93s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.927512833s)

-- stdout --
	* [calico-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-262000" primary control-plane node in "calico-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:51:55.162920    9602 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:51:55.163053    9602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:51:55.163056    9602 out.go:358] Setting ErrFile to fd 2...
	I0914 23:51:55.163059    9602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:51:55.163204    9602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:51:55.164250    9602 out.go:352] Setting JSON to false
	I0914 23:51:55.180506    9602 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6684,"bootTime":1726376431,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:51:55.180576    9602 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:51:55.184706    9602 out.go:177] * [calico-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:51:55.192522    9602 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:51:55.192582    9602 notify.go:220] Checking for updates...
	I0914 23:51:55.200446    9602 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:51:55.203539    9602 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:51:55.206518    9602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:51:55.209469    9602 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:51:55.212472    9602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:51:55.215836    9602 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:51:55.215901    9602 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:51:55.215950    9602 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:51:55.219471    9602 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:51:55.226551    9602 start.go:297] selected driver: qemu2
	I0914 23:51:55.226559    9602 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:51:55.226567    9602 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:51:55.228930    9602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:51:55.230408    9602 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:51:55.233573    9602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:51:55.233598    9602 cni.go:84] Creating CNI manager for "calico"
	I0914 23:51:55.233601    9602 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0914 23:51:55.233632    9602 start.go:340] cluster config:
	{Name:calico-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:51:55.237577    9602 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:51:55.246484    9602 out.go:177] * Starting "calico-262000" primary control-plane node in "calico-262000" cluster
	I0914 23:51:55.249562    9602 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:51:55.249584    9602 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:51:55.249598    9602 cache.go:56] Caching tarball of preloaded images
	I0914 23:51:55.249663    9602 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:51:55.249668    9602 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:51:55.249736    9602 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/calico-262000/config.json ...
	I0914 23:51:55.249751    9602 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/calico-262000/config.json: {Name:mka4ef2871c863609a4573692d8835011a14d7ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:51:55.249965    9602 start.go:360] acquireMachinesLock for calico-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:51:55.249998    9602 start.go:364] duration metric: took 27.334µs to acquireMachinesLock for "calico-262000"
	I0914 23:51:55.250009    9602 start.go:93] Provisioning new machine with config: &{Name:calico-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:51:55.250038    9602 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:51:55.256496    9602 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:51:55.274182    9602 start.go:159] libmachine.API.Create for "calico-262000" (driver="qemu2")
	I0914 23:51:55.274218    9602 client.go:168] LocalClient.Create starting
	I0914 23:51:55.274296    9602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:51:55.274333    9602 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:55.274342    9602 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:55.274384    9602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:51:55.274407    9602 main.go:141] libmachine: Decoding PEM data...
	I0914 23:51:55.274417    9602 main.go:141] libmachine: Parsing certificate...
	I0914 23:51:55.274782    9602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:51:55.437323    9602 main.go:141] libmachine: Creating SSH key...
	I0914 23:51:55.645779    9602 main.go:141] libmachine: Creating Disk image...
	I0914 23:51:55.645787    9602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:51:55.646098    9602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2
	I0914 23:51:55.656001    9602 main.go:141] libmachine: STDOUT: 
	I0914 23:51:55.656022    9602 main.go:141] libmachine: STDERR: 
	I0914 23:51:55.656090    9602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2 +20000M
	I0914 23:51:55.663988    9602 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:51:55.664006    9602 main.go:141] libmachine: STDERR: 
	I0914 23:51:55.664028    9602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2
	I0914 23:51:55.664036    9602 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:51:55.664050    9602 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:51:55.664076    9602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:ae:35:1e:02:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2
	I0914 23:51:55.665677    9602 main.go:141] libmachine: STDOUT: 
	I0914 23:51:55.665693    9602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:51:55.665714    9602 client.go:171] duration metric: took 391.492917ms to LocalClient.Create
	I0914 23:51:57.667861    9602 start.go:128] duration metric: took 2.41783s to createHost
	I0914 23:51:57.667960    9602 start.go:83] releasing machines lock for "calico-262000", held for 2.417979917s
	W0914 23:51:57.668012    9602 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:57.679334    9602 out.go:177] * Deleting "calico-262000" in qemu2 ...
	W0914 23:51:57.710705    9602 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:51:57.710727    9602 start.go:729] Will try again in 5 seconds ...
	I0914 23:52:02.713033    9602 start.go:360] acquireMachinesLock for calico-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:52:02.713482    9602 start.go:364] duration metric: took 355.584µs to acquireMachinesLock for "calico-262000"
	I0914 23:52:02.713631    9602 start.go:93] Provisioning new machine with config: &{Name:calico-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:52:02.713941    9602 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:52:02.733768    9602 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:52:02.784660    9602 start.go:159] libmachine.API.Create for "calico-262000" (driver="qemu2")
	I0914 23:52:02.784708    9602 client.go:168] LocalClient.Create starting
	I0914 23:52:02.784833    9602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:52:02.784913    9602 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:02.784928    9602 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:02.784986    9602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:52:02.785034    9602 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:02.785048    9602 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:02.785683    9602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:52:02.958063    9602 main.go:141] libmachine: Creating SSH key...
	I0914 23:52:02.993489    9602 main.go:141] libmachine: Creating Disk image...
	I0914 23:52:02.993494    9602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:52:02.993754    9602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2
	I0914 23:52:03.002979    9602 main.go:141] libmachine: STDOUT: 
	I0914 23:52:03.002998    9602 main.go:141] libmachine: STDERR: 
	I0914 23:52:03.003058    9602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2 +20000M
	I0914 23:52:03.010824    9602 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:52:03.010845    9602 main.go:141] libmachine: STDERR: 
	I0914 23:52:03.010855    9602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2
	I0914 23:52:03.010859    9602 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:52:03.010868    9602 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:52:03.010912    9602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:dd:a7:73:fd:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/calico-262000/disk.qcow2
	I0914 23:52:03.012578    9602 main.go:141] libmachine: STDOUT: 
	I0914 23:52:03.012593    9602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:52:03.012605    9602 client.go:171] duration metric: took 227.893541ms to LocalClient.Create
	I0914 23:52:05.014746    9602 start.go:128] duration metric: took 2.300770917s to createHost
	I0914 23:52:05.014847    9602 start.go:83] releasing machines lock for "calico-262000", held for 2.3013615s
	W0914 23:52:05.015301    9602 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:05.028657    9602 out.go:201] 
	W0914 23:52:05.034870    9602 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:52:05.034929    9602 out.go:270] * 
	* 
	W0914 23:52:05.037498    9602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:52:05.046712    9602 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.93s)
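
Every start attempt above dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal way to check the daemon on the build host (a sketch, assuming socket_vmnet was installed at the paths shown in the logs above):

	# Is the unix socket present on disk?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon process running?
	pgrep -fl socket_vmnet

If the socket is missing or no daemon is running, every qemu2 start below will keep failing with the same "Connection refused".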

TestNetworkPlugins/group/custom-flannel/Start (9.95s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.951892334s)

-- stdout --
	* [custom-flannel-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-262000" primary control-plane node in "custom-flannel-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:52:07.504769    9719 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:52:07.504896    9719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:52:07.504903    9719 out.go:358] Setting ErrFile to fd 2...
	I0914 23:52:07.504906    9719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:52:07.505053    9719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:52:07.506095    9719 out.go:352] Setting JSON to false
	I0914 23:52:07.522102    9719 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6696,"bootTime":1726376431,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:52:07.522162    9719 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:52:07.527977    9719 out.go:177] * [custom-flannel-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:52:07.535765    9719 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:52:07.535828    9719 notify.go:220] Checking for updates...
	I0914 23:52:07.542896    9719 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:52:07.544271    9719 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:52:07.546878    9719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:52:07.549900    9719 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:52:07.552883    9719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:52:07.556268    9719 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:52:07.556337    9719 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:52:07.556387    9719 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:52:07.560837    9719 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:52:07.567856    9719 start.go:297] selected driver: qemu2
	I0914 23:52:07.567862    9719 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:52:07.567868    9719 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:52:07.570190    9719 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:52:07.572879    9719 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:52:07.575938    9719 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:52:07.575960    9719 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0914 23:52:07.575975    9719 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0914 23:52:07.576011    9719 start.go:340] cluster config:
	{Name:custom-flannel-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:52:07.579939    9719 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:52:07.586825    9719 out.go:177] * Starting "custom-flannel-262000" primary control-plane node in "custom-flannel-262000" cluster
	I0914 23:52:07.589832    9719 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:52:07.589857    9719 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:52:07.589871    9719 cache.go:56] Caching tarball of preloaded images
	I0914 23:52:07.589937    9719 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:52:07.589943    9719 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:52:07.590021    9719 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/custom-flannel-262000/config.json ...
	I0914 23:52:07.590032    9719 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/custom-flannel-262000/config.json: {Name:mk587d2eb1c86638237cc619a56158c7069efe18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:52:07.590375    9719 start.go:360] acquireMachinesLock for custom-flannel-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:52:07.590409    9719 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "custom-flannel-262000"
	I0914 23:52:07.590420    9719 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:52:07.590444    9719 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:52:07.594871    9719 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:52:07.612348    9719 start.go:159] libmachine.API.Create for "custom-flannel-262000" (driver="qemu2")
	I0914 23:52:07.612382    9719 client.go:168] LocalClient.Create starting
	I0914 23:52:07.612448    9719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:52:07.612476    9719 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:07.612486    9719 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:07.612522    9719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:52:07.612545    9719 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:07.612555    9719 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:07.612919    9719 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:52:07.775335    9719 main.go:141] libmachine: Creating SSH key...
	I0914 23:52:07.874794    9719 main.go:141] libmachine: Creating Disk image...
	I0914 23:52:07.874800    9719 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:52:07.875049    9719 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2
	I0914 23:52:07.884317    9719 main.go:141] libmachine: STDOUT: 
	I0914 23:52:07.884335    9719 main.go:141] libmachine: STDERR: 
	I0914 23:52:07.884401    9719 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2 +20000M
	I0914 23:52:07.892184    9719 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:52:07.892199    9719 main.go:141] libmachine: STDERR: 
	I0914 23:52:07.892221    9719 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2
	I0914 23:52:07.892228    9719 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:52:07.892244    9719 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:52:07.892270    9719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:3a:ca:9b:2d:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2
	I0914 23:52:07.893923    9719 main.go:141] libmachine: STDOUT: 
	I0914 23:52:07.893937    9719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:52:07.893958    9719 client.go:171] duration metric: took 281.573042ms to LocalClient.Create
	I0914 23:52:09.896129    9719 start.go:128] duration metric: took 2.30569s to createHost
	I0914 23:52:09.896195    9719 start.go:83] releasing machines lock for "custom-flannel-262000", held for 2.305800208s
	W0914 23:52:09.896233    9719 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:09.907501    9719 out.go:177] * Deleting "custom-flannel-262000" in qemu2 ...
	W0914 23:52:09.940401    9719 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:09.940425    9719 start.go:729] Will try again in 5 seconds ...
	I0914 23:52:14.941915    9719 start.go:360] acquireMachinesLock for custom-flannel-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:52:14.942319    9719 start.go:364] duration metric: took 311.167µs to acquireMachinesLock for "custom-flannel-262000"
	I0914 23:52:14.942426    9719 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:52:14.942712    9719 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:52:14.949315    9719 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:52:14.999688    9719 start.go:159] libmachine.API.Create for "custom-flannel-262000" (driver="qemu2")
	I0914 23:52:14.999736    9719 client.go:168] LocalClient.Create starting
	I0914 23:52:14.999862    9719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:52:14.999919    9719 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:14.999942    9719 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:15.000003    9719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:52:15.000047    9719 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:15.000063    9719 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:15.000593    9719 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:52:15.172530    9719 main.go:141] libmachine: Creating SSH key...
	I0914 23:52:15.359180    9719 main.go:141] libmachine: Creating Disk image...
	I0914 23:52:15.359186    9719 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:52:15.359443    9719 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2
	I0914 23:52:15.369053    9719 main.go:141] libmachine: STDOUT: 
	I0914 23:52:15.369074    9719 main.go:141] libmachine: STDERR: 
	I0914 23:52:15.369133    9719 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2 +20000M
	I0914 23:52:15.377053    9719 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:52:15.377069    9719 main.go:141] libmachine: STDERR: 
	I0914 23:52:15.377082    9719 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2
	I0914 23:52:15.377086    9719 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:52:15.377098    9719 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:52:15.377130    9719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:64:94:c8:b4:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/custom-flannel-262000/disk.qcow2
	I0914 23:52:15.378800    9719 main.go:141] libmachine: STDOUT: 
	I0914 23:52:15.378815    9719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:52:15.378831    9719 client.go:171] duration metric: took 379.093542ms to LocalClient.Create
	I0914 23:52:17.380984    9719 start.go:128] duration metric: took 2.438273209s to createHost
	I0914 23:52:17.381045    9719 start.go:83] releasing machines lock for "custom-flannel-262000", held for 2.438727416s
	W0914 23:52:17.381406    9719 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:17.397061    9719 out.go:201] 
	W0914 23:52:17.402022    9719 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:52:17.402049    9719 out.go:270] * 
	* 
	W0914 23:52:17.404562    9719 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:52:17.414007    9719 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.95s)
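
The disk-image preparation that succeeds before each failed launch is the same pair of qemu-img invocations recorded in the logs; they can be replayed by hand to confirm the image handling itself is healthy (a sketch with the machine paths shortened for readability):

	# Convert the raw base disk to qcow2, as libmachine does
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	# Grow the image by 20000 MB, matching Disk=20000MB in the cluster config
	qemu-img resize disk.qcow2 +20000M

Both steps return cleanly in every run above ("Image resized.", empty STDERR), which narrows the failure to the socket_vmnet connection rather than the disk setup.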

TestNetworkPlugins/group/false/Start (9.83s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.824016292s)

-- stdout --
	* [false-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-262000" primary control-plane node in "false-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:52:19.857305    9836 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:52:19.857428    9836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:52:19.857432    9836 out.go:358] Setting ErrFile to fd 2...
	I0914 23:52:19.857434    9836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:52:19.857565    9836 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:52:19.858695    9836 out.go:352] Setting JSON to false
	I0914 23:52:19.874642    9836 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6708,"bootTime":1726376431,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:52:19.874710    9836 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:52:19.881032    9836 out.go:177] * [false-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:52:19.888007    9836 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:52:19.888053    9836 notify.go:220] Checking for updates...
	I0914 23:52:19.894942    9836 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:52:19.898015    9836 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:52:19.901985    9836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:52:19.904955    9836 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:52:19.907961    9836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:52:19.911330    9836 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:52:19.911411    9836 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:52:19.911462    9836 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:52:19.913892    9836 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:52:19.921007    9836 start.go:297] selected driver: qemu2
	I0914 23:52:19.921014    9836 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:52:19.921022    9836 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:52:19.923243    9836 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:52:19.924464    9836 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:52:19.927088    9836 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:52:19.927108    9836 cni.go:84] Creating CNI manager for "false"
	I0914 23:52:19.927145    9836 start.go:340] cluster config:
	{Name:false-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:52:19.930671    9836 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:52:19.938964    9836 out.go:177] * Starting "false-262000" primary control-plane node in "false-262000" cluster
	I0914 23:52:19.942907    9836 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:52:19.942929    9836 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:52:19.942938    9836 cache.go:56] Caching tarball of preloaded images
	I0914 23:52:19.942994    9836 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:52:19.942999    9836 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:52:19.943051    9836 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/false-262000/config.json ...
	I0914 23:52:19.943062    9836 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/false-262000/config.json: {Name:mk3e7a250817eeaf41b1ef0a13fd14a0cd9391b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:52:19.943406    9836 start.go:360] acquireMachinesLock for false-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:52:19.943437    9836 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "false-262000"
	I0914 23:52:19.943446    9836 start.go:93] Provisioning new machine with config: &{Name:false-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:52:19.943473    9836 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:52:19.951983    9836 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:52:19.969117    9836 start.go:159] libmachine.API.Create for "false-262000" (driver="qemu2")
	I0914 23:52:19.969146    9836 client.go:168] LocalClient.Create starting
	I0914 23:52:19.969208    9836 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:52:19.969241    9836 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:19.969250    9836 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:19.969288    9836 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:52:19.969313    9836 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:19.969323    9836 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:19.969783    9836 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:52:20.133027    9836 main.go:141] libmachine: Creating SSH key...
	I0914 23:52:20.178775    9836 main.go:141] libmachine: Creating Disk image...
	I0914 23:52:20.178784    9836 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:52:20.179022    9836 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2
	I0914 23:52:20.188302    9836 main.go:141] libmachine: STDOUT: 
	I0914 23:52:20.188322    9836 main.go:141] libmachine: STDERR: 
	I0914 23:52:20.188386    9836 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2 +20000M
	I0914 23:52:20.196224    9836 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:52:20.196240    9836 main.go:141] libmachine: STDERR: 
	I0914 23:52:20.196255    9836 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2
	I0914 23:52:20.196261    9836 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:52:20.196275    9836 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:52:20.196306    9836 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:3b:85:46:88:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2
	I0914 23:52:20.197963    9836 main.go:141] libmachine: STDOUT: 
	I0914 23:52:20.197977    9836 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:52:20.198000    9836 client.go:171] duration metric: took 228.850042ms to LocalClient.Create
	I0914 23:52:22.200138    9836 start.go:128] duration metric: took 2.256670375s to createHost
	I0914 23:52:22.200198    9836 start.go:83] releasing machines lock for "false-262000", held for 2.25677875s
	W0914 23:52:22.200240    9836 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:22.214473    9836 out.go:177] * Deleting "false-262000" in qemu2 ...
	W0914 23:52:22.249153    9836 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:22.249198    9836 start.go:729] Will try again in 5 seconds ...
	I0914 23:52:27.251364    9836 start.go:360] acquireMachinesLock for false-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:52:27.251729    9836 start.go:364] duration metric: took 290.708µs to acquireMachinesLock for "false-262000"
	I0914 23:52:27.251827    9836 start.go:93] Provisioning new machine with config: &{Name:false-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:52:27.252334    9836 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:52:27.262877    9836 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:52:27.313374    9836 start.go:159] libmachine.API.Create for "false-262000" (driver="qemu2")
	I0914 23:52:27.313432    9836 client.go:168] LocalClient.Create starting
	I0914 23:52:27.313538    9836 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:52:27.313609    9836 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:27.313625    9836 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:27.313684    9836 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:52:27.313727    9836 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:27.313745    9836 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:27.314366    9836 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:52:27.486505    9836 main.go:141] libmachine: Creating SSH key...
	I0914 23:52:27.588083    9836 main.go:141] libmachine: Creating Disk image...
	I0914 23:52:27.588089    9836 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:52:27.588334    9836 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2
	I0914 23:52:27.597665    9836 main.go:141] libmachine: STDOUT: 
	I0914 23:52:27.597680    9836 main.go:141] libmachine: STDERR: 
	I0914 23:52:27.597747    9836 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2 +20000M
	I0914 23:52:27.605586    9836 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:52:27.605602    9836 main.go:141] libmachine: STDERR: 
	I0914 23:52:27.605617    9836 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2
	I0914 23:52:27.605621    9836 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:52:27.605629    9836 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:52:27.605680    9836 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:df:86:77:bb:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/false-262000/disk.qcow2
	I0914 23:52:27.607356    9836 main.go:141] libmachine: STDOUT: 
	I0914 23:52:27.607369    9836 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:52:27.607381    9836 client.go:171] duration metric: took 293.947625ms to LocalClient.Create
	I0914 23:52:29.609535    9836 start.go:128] duration metric: took 2.357197792s to createHost
	I0914 23:52:29.609593    9836 start.go:83] releasing machines lock for "false-262000", held for 2.357868834s
	W0914 23:52:29.609943    9836 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:29.619530    9836 out.go:201] 
	W0914 23:52:29.626473    9836 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:52:29.626498    9836 out.go:270] * 
	* 
	W0914 23:52:29.629431    9836 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:52:29.637517    9836 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.83s)
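
Because minikube retries only once (delete the profile, wait 5 seconds, recreate) and both attempts hit the refused socket, each test exits with status 80 (GUEST_PROVISION). The refusal can be reproduced outside minikube by running the client from the logs with a trivial command in place of qemu-system-aarch64 (a sketch; /usr/bin/true is a stand-in, not part of the original invocation):

	# Expect: Failed to connect to "/var/run/socket_vmnet": Connection refused
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true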

TestNetworkPlugins/group/enable-default-cni/Start (9.96s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.958400958s)

-- stdout --
	* [enable-default-cni-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-262000" primary control-plane node in "enable-default-cni-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:52:31.815886    9945 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:52:31.816021    9945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:52:31.816024    9945 out.go:358] Setting ErrFile to fd 2...
	I0914 23:52:31.816027    9945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:52:31.816159    9945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:52:31.817296    9945 out.go:352] Setting JSON to false
	I0914 23:52:31.833469    9945 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6720,"bootTime":1726376431,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:52:31.833535    9945 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:52:31.838409    9945 out.go:177] * [enable-default-cni-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:52:31.846368    9945 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:52:31.846411    9945 notify.go:220] Checking for updates...
	I0914 23:52:31.854358    9945 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:52:31.857322    9945 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:52:31.861356    9945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:52:31.864402    9945 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:52:31.867306    9945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:52:31.870665    9945 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:52:31.870743    9945 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:52:31.870791    9945 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:52:31.875367    9945 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:52:31.882371    9945 start.go:297] selected driver: qemu2
	I0914 23:52:31.882376    9945 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:52:31.882386    9945 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:52:31.884885    9945 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:52:31.887384    9945 out.go:177] * Automatically selected the socket_vmnet network
	E0914 23:52:31.891425    9945 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0914 23:52:31.891444    9945 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:52:31.891477    9945 cni.go:84] Creating CNI manager for "bridge"
	I0914 23:52:31.891487    9945 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:52:31.891519    9945 start.go:340] cluster config:
	{Name:enable-default-cni-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:52:31.895258    9945 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:52:31.904335    9945 out.go:177] * Starting "enable-default-cni-262000" primary control-plane node in "enable-default-cni-262000" cluster
	I0914 23:52:31.912391    9945 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:52:31.912406    9945 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:52:31.912423    9945 cache.go:56] Caching tarball of preloaded images
	I0914 23:52:31.912505    9945 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:52:31.912511    9945 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:52:31.912579    9945 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/enable-default-cni-262000/config.json ...
	I0914 23:52:31.912590    9945 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/enable-default-cni-262000/config.json: {Name:mkde692fdbf3f61daebd20f86451d97e64a562df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:52:31.912947    9945 start.go:360] acquireMachinesLock for enable-default-cni-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:52:31.912984    9945 start.go:364] duration metric: took 29.291µs to acquireMachinesLock for "enable-default-cni-262000"
	I0914 23:52:31.912995    9945 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:52:31.913027    9945 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:52:31.920333    9945 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:52:31.939282    9945 start.go:159] libmachine.API.Create for "enable-default-cni-262000" (driver="qemu2")
	I0914 23:52:31.939313    9945 client.go:168] LocalClient.Create starting
	I0914 23:52:31.939389    9945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:52:31.939423    9945 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:31.939433    9945 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:31.939472    9945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:52:31.939498    9945 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:31.939507    9945 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:31.939921    9945 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:52:32.102763    9945 main.go:141] libmachine: Creating SSH key...
	I0914 23:52:32.204640    9945 main.go:141] libmachine: Creating Disk image...
	I0914 23:52:32.204646    9945 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:52:32.204883    9945 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2
	I0914 23:52:32.214312    9945 main.go:141] libmachine: STDOUT: 
	I0914 23:52:32.214337    9945 main.go:141] libmachine: STDERR: 
	I0914 23:52:32.214399    9945 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2 +20000M
	I0914 23:52:32.222209    9945 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:52:32.222223    9945 main.go:141] libmachine: STDERR: 
	I0914 23:52:32.222241    9945 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2
	I0914 23:52:32.222251    9945 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:52:32.222261    9945 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:52:32.222286    9945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:27:c6:26:9f:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2
	I0914 23:52:32.223934    9945 main.go:141] libmachine: STDOUT: 
	I0914 23:52:32.223950    9945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:52:32.223969    9945 client.go:171] duration metric: took 284.649083ms to LocalClient.Create
	I0914 23:52:34.226154    9945 start.go:128] duration metric: took 2.313136959s to createHost
	I0914 23:52:34.226219    9945 start.go:83] releasing machines lock for "enable-default-cni-262000", held for 2.3132525s
	W0914 23:52:34.226281    9945 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:34.237403    9945 out.go:177] * Deleting "enable-default-cni-262000" in qemu2 ...
	W0914 23:52:34.272470    9945 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:34.272497    9945 start.go:729] Will try again in 5 seconds ...
	I0914 23:52:39.274678    9945 start.go:360] acquireMachinesLock for enable-default-cni-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:52:39.275128    9945 start.go:364] duration metric: took 360.833µs to acquireMachinesLock for "enable-default-cni-262000"
	I0914 23:52:39.275250    9945 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:52:39.275529    9945 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:52:39.285021    9945 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:52:39.335171    9945 start.go:159] libmachine.API.Create for "enable-default-cni-262000" (driver="qemu2")
	I0914 23:52:39.335226    9945 client.go:168] LocalClient.Create starting
	I0914 23:52:39.335342    9945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:52:39.335411    9945 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:39.335426    9945 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:39.335494    9945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:52:39.335541    9945 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:39.335554    9945 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:39.336085    9945 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:52:39.508639    9945 main.go:141] libmachine: Creating SSH key...
	I0914 23:52:39.678574    9945 main.go:141] libmachine: Creating Disk image...
	I0914 23:52:39.678590    9945 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:52:39.678865    9945 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2
	I0914 23:52:39.688355    9945 main.go:141] libmachine: STDOUT: 
	I0914 23:52:39.688370    9945 main.go:141] libmachine: STDERR: 
	I0914 23:52:39.688419    9945 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2 +20000M
	I0914 23:52:39.696272    9945 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:52:39.696291    9945 main.go:141] libmachine: STDERR: 
	I0914 23:52:39.696308    9945 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2
	I0914 23:52:39.696315    9945 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:52:39.696324    9945 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:52:39.696359    9945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:df:53:26:14:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/enable-default-cni-262000/disk.qcow2
	I0914 23:52:39.698062    9945 main.go:141] libmachine: STDOUT: 
	I0914 23:52:39.698073    9945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:52:39.698089    9945 client.go:171] duration metric: took 362.862625ms to LocalClient.Create
	I0914 23:52:41.700240    9945 start.go:128] duration metric: took 2.424710041s to createHost
	I0914 23:52:41.700295    9945 start.go:83] releasing machines lock for "enable-default-cni-262000", held for 2.425172958s
	W0914 23:52:41.700768    9945 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:41.712442    9945 out.go:201] 
	W0914 23:52:41.716566    9945 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:52:41.716613    9945 out.go:270] * 
	* 
	W0914 23:52:41.719415    9945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:52:41.730407    9945 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.96s)
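
The root cause in this run is the repeated ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused: socket_vmnet_client could not reach the host-side socket_vmnet daemon, so QEMU was never handed a network file descriptor. A minimal Go probe can confirm whether anything is listening on that socket (a hypothetical diagnostic, not part of the test suite; the socket path is taken from the log above):

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/socket_vmnet" // path reported in the failing log
    	if _, err := os.Stat(sock); err != nil {
    		fmt.Println("socket file problem:", err) // e.g. socket file missing
    		return
    	}
    	// The dial succeeds only if the socket_vmnet daemon is accepting clients;
    	// "connection refused" here reproduces the failure seen in the test.
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Println("daemon not accepting connections:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is up")
    }

If the dial fails with "connection refused" while the socket file exists, the daemon behind /var/run/socket_vmnet is down, which matches every failure in this group.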

TestNetworkPlugins/group/flannel/Start (10.03s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.03141275s)

-- stdout --
	* [flannel-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-262000" primary control-plane node in "flannel-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:52:43.907626   10054 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:52:43.907747   10054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:52:43.907751   10054 out.go:358] Setting ErrFile to fd 2...
	I0914 23:52:43.907754   10054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:52:43.907914   10054 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:52:43.908966   10054 out.go:352] Setting JSON to false
	I0914 23:52:43.925399   10054 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6732,"bootTime":1726376431,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:52:43.925469   10054 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:52:43.931029   10054 out.go:177] * [flannel-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:52:43.938950   10054 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:52:43.939003   10054 notify.go:220] Checking for updates...
	I0914 23:52:43.945861   10054 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:52:43.948911   10054 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:52:43.952845   10054 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:52:43.955918   10054 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:52:43.958855   10054 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:52:43.962124   10054 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:52:43.962194   10054 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:52:43.962257   10054 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:52:43.966924   10054 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:52:43.973820   10054 start.go:297] selected driver: qemu2
	I0914 23:52:43.973826   10054 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:52:43.973832   10054 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:52:43.976248   10054 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:52:43.978887   10054 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:52:43.981947   10054 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:52:43.981964   10054 cni.go:84] Creating CNI manager for "flannel"
	I0914 23:52:43.981973   10054 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0914 23:52:43.982008   10054 start.go:340] cluster config:
	{Name:flannel-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:52:43.985906   10054 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:52:43.994910   10054 out.go:177] * Starting "flannel-262000" primary control-plane node in "flannel-262000" cluster
	I0914 23:52:43.997851   10054 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:52:43.997868   10054 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:52:43.997881   10054 cache.go:56] Caching tarball of preloaded images
	I0914 23:52:43.997954   10054 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:52:43.997960   10054 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:52:43.998025   10054 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/flannel-262000/config.json ...
	I0914 23:52:43.998042   10054 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/flannel-262000/config.json: {Name:mkffc7cb24f3e84e2c466656b6b9cc4351207faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:52:43.998271   10054 start.go:360] acquireMachinesLock for flannel-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:52:43.998306   10054 start.go:364] duration metric: took 28.584µs to acquireMachinesLock for "flannel-262000"
	I0914 23:52:43.998317   10054 start.go:93] Provisioning new machine with config: &{Name:flannel-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:52:43.998349   10054 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:52:44.005796   10054 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:52:44.024798   10054 start.go:159] libmachine.API.Create for "flannel-262000" (driver="qemu2")
	I0914 23:52:44.024836   10054 client.go:168] LocalClient.Create starting
	I0914 23:52:44.024901   10054 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:52:44.024930   10054 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:44.024939   10054 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:44.024981   10054 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:52:44.025005   10054 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:44.025014   10054 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:44.025421   10054 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:52:44.187818   10054 main.go:141] libmachine: Creating SSH key...
	I0914 23:52:44.365017   10054 main.go:141] libmachine: Creating Disk image...
	I0914 23:52:44.365024   10054 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:52:44.365294   10054 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2
	I0914 23:52:44.374875   10054 main.go:141] libmachine: STDOUT: 
	I0914 23:52:44.374895   10054 main.go:141] libmachine: STDERR: 
	I0914 23:52:44.374948   10054 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2 +20000M
	I0914 23:52:44.382801   10054 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:52:44.382813   10054 main.go:141] libmachine: STDERR: 
	I0914 23:52:44.382835   10054 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2
	I0914 23:52:44.382841   10054 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:52:44.382856   10054 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:52:44.382881   10054 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:f8:14:de:de:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2
	I0914 23:52:44.384516   10054 main.go:141] libmachine: STDOUT: 
	I0914 23:52:44.384529   10054 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:52:44.384550   10054 client.go:171] duration metric: took 359.711375ms to LocalClient.Create
	I0914 23:52:46.386734   10054 start.go:128] duration metric: took 2.3883845s to createHost
	I0914 23:52:46.386831   10054 start.go:83] releasing machines lock for "flannel-262000", held for 2.388543792s
	W0914 23:52:46.386893   10054 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:46.402300   10054 out.go:177] * Deleting "flannel-262000" in qemu2 ...
	W0914 23:52:46.433418   10054 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:46.433452   10054 start.go:729] Will try again in 5 seconds ...
	I0914 23:52:51.435592   10054 start.go:360] acquireMachinesLock for flannel-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:52:51.436029   10054 start.go:364] duration metric: took 357.875µs to acquireMachinesLock for "flannel-262000"
	I0914 23:52:51.436148   10054 start.go:93] Provisioning new machine with config: &{Name:flannel-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:52:51.436489   10054 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:52:51.453075   10054 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:52:51.503937   10054 start.go:159] libmachine.API.Create for "flannel-262000" (driver="qemu2")
	I0914 23:52:51.503998   10054 client.go:168] LocalClient.Create starting
	I0914 23:52:51.504111   10054 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:52:51.504178   10054 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:51.504192   10054 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:51.504280   10054 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:52:51.504333   10054 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:51.504345   10054 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:51.504901   10054 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:52:51.676923   10054 main.go:141] libmachine: Creating SSH key...
	I0914 23:52:51.845296   10054 main.go:141] libmachine: Creating Disk image...
	I0914 23:52:51.845303   10054 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:52:51.845567   10054 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2
	I0914 23:52:51.855392   10054 main.go:141] libmachine: STDOUT: 
	I0914 23:52:51.855408   10054 main.go:141] libmachine: STDERR: 
	I0914 23:52:51.855478   10054 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2 +20000M
	I0914 23:52:51.863557   10054 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:52:51.863593   10054 main.go:141] libmachine: STDERR: 
	I0914 23:52:51.863607   10054 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2
	I0914 23:52:51.863612   10054 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:52:51.863626   10054 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:52:51.863655   10054 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:c7:5a:4a:0d:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/flannel-262000/disk.qcow2
	I0914 23:52:51.865333   10054 main.go:141] libmachine: STDOUT: 
	I0914 23:52:51.865346   10054 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:52:51.865361   10054 client.go:171] duration metric: took 361.358417ms to LocalClient.Create
	I0914 23:52:53.867508   10054 start.go:128] duration metric: took 2.43101675s to createHost
	I0914 23:52:53.867581   10054 start.go:83] releasing machines lock for "flannel-262000", held for 2.431556292s
	W0914 23:52:53.867935   10054 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:53.876469   10054 out.go:201] 
	W0914 23:52:53.884656   10054 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:52:53.884684   10054 out.go:270] * 
	* 
	W0914 23:52:53.887314   10054 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:52:53.895389   10054 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.03s)
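
Each failing start above follows the same shape: the first VM creation fails, minikube deletes the half-created profile, waits five seconds ("Will try again in 5 seconds ..."), and when the second attempt fails identically it escalates to GUEST_PROVISION and exit status 80. A rough sketch of that control flow (hypothetical stand-in names, not minikube's actual implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startWithRetry mirrors the single retry visible in the logs: one failed
    // create, a fixed five-second pause, then a final attempt whose error is
    // fatal. createHost is a hypothetical stand-in for the driver's VM creation.
    func startWithRetry(createHost func() error) error {
    	err := createHost()
    	if err == nil {
    		return nil
    	}
    	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
    	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
    	if err := createHost(); err != nil {
    		return fmt.Errorf("error provisioning guest: %w", err)
    	}
    	return nil
    }

    func main() {
    	refused := errors.New("Failed to connect to \"/var/run/socket_vmnet\": Connection refused")
    	// Both attempts fail, as in the log, so the wrapped error is printed.
    	fmt.Println(startWithRetry(func() error { return refused }))
    }

Because the daemon never comes back within the five-second window, both attempts hit the same refused connection, which is why every test in this group fails in roughly ten seconds.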

TestNetworkPlugins/group/bridge/Start (9.85s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.850739708s)

-- stdout --
	* [bridge-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-262000" primary control-plane node in "bridge-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:52:56.265315   10171 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:52:56.265447   10171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:52:56.265450   10171 out.go:358] Setting ErrFile to fd 2...
	I0914 23:52:56.265452   10171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:52:56.265589   10171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:52:56.266647   10171 out.go:352] Setting JSON to false
	I0914 23:52:56.282724   10171 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6745,"bootTime":1726376431,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:52:56.282784   10171 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:52:56.288586   10171 out.go:177] * [bridge-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:52:56.295542   10171 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:52:56.295600   10171 notify.go:220] Checking for updates...
	I0914 23:52:56.302565   10171 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:52:56.305501   10171 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:52:56.308571   10171 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:52:56.311506   10171 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:52:56.314507   10171 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:52:56.317843   10171 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:52:56.317910   10171 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:52:56.317956   10171 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:52:56.321379   10171 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:52:56.328493   10171 start.go:297] selected driver: qemu2
	I0914 23:52:56.328499   10171 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:52:56.328504   10171 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:52:56.330813   10171 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:52:56.333491   10171 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:52:56.336637   10171 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:52:56.336657   10171 cni.go:84] Creating CNI manager for "bridge"
	I0914 23:52:56.336660   10171 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:52:56.336703   10171 start.go:340] cluster config:
	{Name:bridge-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:52:56.340313   10171 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:52:56.348526   10171 out.go:177] * Starting "bridge-262000" primary control-plane node in "bridge-262000" cluster
	I0914 23:52:56.352490   10171 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:52:56.352503   10171 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:52:56.352514   10171 cache.go:56] Caching tarball of preloaded images
	I0914 23:52:56.352566   10171 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:52:56.352571   10171 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:52:56.352630   10171 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/bridge-262000/config.json ...
	I0914 23:52:56.352641   10171 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/bridge-262000/config.json: {Name:mk868882e3ab3dbd64da97836422a0a4f3177c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:52:56.352857   10171 start.go:360] acquireMachinesLock for bridge-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:52:56.352889   10171 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "bridge-262000"
	I0914 23:52:56.352900   10171 start.go:93] Provisioning new machine with config: &{Name:bridge-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:52:56.352935   10171 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:52:56.361520   10171 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:52:56.378928   10171 start.go:159] libmachine.API.Create for "bridge-262000" (driver="qemu2")
	I0914 23:52:56.378953   10171 client.go:168] LocalClient.Create starting
	I0914 23:52:56.379013   10171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:52:56.379044   10171 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:56.379065   10171 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:56.379105   10171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:52:56.379128   10171 main.go:141] libmachine: Decoding PEM data...
	I0914 23:52:56.379137   10171 main.go:141] libmachine: Parsing certificate...
	I0914 23:52:56.379481   10171 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:52:56.544529   10171 main.go:141] libmachine: Creating SSH key...
	I0914 23:52:56.654669   10171 main.go:141] libmachine: Creating Disk image...
	I0914 23:52:56.654678   10171 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:52:56.654927   10171 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2
	I0914 23:52:56.664504   10171 main.go:141] libmachine: STDOUT: 
	I0914 23:52:56.664518   10171 main.go:141] libmachine: STDERR: 
	I0914 23:52:56.664566   10171 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2 +20000M
	I0914 23:52:56.672419   10171 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:52:56.672439   10171 main.go:141] libmachine: STDERR: 
	I0914 23:52:56.672454   10171 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2
	I0914 23:52:56.672461   10171 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:52:56.672470   10171 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:52:56.672497   10171 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ce:eb:49:88:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2
	I0914 23:52:56.674129   10171 main.go:141] libmachine: STDOUT: 
	I0914 23:52:56.674141   10171 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:52:56.674162   10171 client.go:171] duration metric: took 295.206041ms to LocalClient.Create
	I0914 23:52:58.676311   10171 start.go:128] duration metric: took 2.323376709s to createHost
	I0914 23:52:58.676379   10171 start.go:83] releasing machines lock for "bridge-262000", held for 2.323507959s
	W0914 23:52:58.676431   10171 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:58.687682   10171 out.go:177] * Deleting "bridge-262000" in qemu2 ...
	W0914 23:52:58.718534   10171 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:52:58.718553   10171 start.go:729] Will try again in 5 seconds ...
	I0914 23:53:03.720793   10171 start.go:360] acquireMachinesLock for bridge-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:03.721280   10171 start.go:364] duration metric: took 384.625µs to acquireMachinesLock for "bridge-262000"
	I0914 23:53:03.721394   10171 start.go:93] Provisioning new machine with config: &{Name:bridge-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:53:03.721680   10171 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:53:03.740328   10171 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:53:03.792444   10171 start.go:159] libmachine.API.Create for "bridge-262000" (driver="qemu2")
	I0914 23:53:03.792502   10171 client.go:168] LocalClient.Create starting
	I0914 23:53:03.792625   10171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:53:03.792681   10171 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:03.792698   10171 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:03.792757   10171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:53:03.792812   10171 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:03.792823   10171 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:03.793347   10171 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:53:03.965993   10171 main.go:141] libmachine: Creating SSH key...
	I0914 23:53:04.020894   10171 main.go:141] libmachine: Creating Disk image...
	I0914 23:53:04.020899   10171 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:53:04.021140   10171 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2
	I0914 23:53:04.030306   10171 main.go:141] libmachine: STDOUT: 
	I0914 23:53:04.030326   10171 main.go:141] libmachine: STDERR: 
	I0914 23:53:04.030385   10171 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2 +20000M
	I0914 23:53:04.038260   10171 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:53:04.038274   10171 main.go:141] libmachine: STDERR: 
	I0914 23:53:04.038286   10171 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2
	I0914 23:53:04.038291   10171 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:53:04.038302   10171 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:04.038332   10171 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:e7:e7:06:bd:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/bridge-262000/disk.qcow2
	I0914 23:53:04.039942   10171 main.go:141] libmachine: STDOUT: 
	I0914 23:53:04.039955   10171 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:04.039966   10171 client.go:171] duration metric: took 247.462166ms to LocalClient.Create
	I0914 23:53:06.042116   10171 start.go:128] duration metric: took 2.320417541s to createHost
	I0914 23:53:06.042177   10171 start.go:83] releasing machines lock for "bridge-262000", held for 2.3208985s
	W0914 23:53:06.042505   10171 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:06.056221   10171 out.go:201] 
	W0914 23:53:06.061293   10171 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:53:06.061320   10171 out.go:270] * 
	* 
	W0914 23:53:06.064118   10171 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:53:06.073086   10171 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.85s)
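
The root cause for this failure (and for every other Start failure in this section) is visible in the STDERR capture above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never handed a network file descriptor and the start aborts. "Connection refused" on a unix socket specifically means the socket file exists but nothing is listening on it; a missing file would fail with "no such file or directory" instead. The following is a minimal diagnostic sketch in Go, not part of minikube or the test suite, that dials the same path (taken from SocketVMnetPath in the cluster config above) to separate the two cases:

// socketcheck.go - hypothetical diagnostic sketch, not part of minikube.
// Dials the unix socket that socket_vmnet_client uses, to distinguish
// "daemon not listening" (connection refused) from "socket file missing".
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump above
	if _, err := os.Stat(sock); err != nil {
		fmt.Printf("socket file missing: %v\n", err) // daemon never created it
		os.Exit(1)
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A socket file with no listener reproduces the same
		// "Connection refused" seen in the log above.
		fmt.Printf("dial failed: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a Homebrew setup the usual remedy is restarting the daemon, e.g. `sudo brew services restart socket_vmnet` (assuming socket_vmnet was installed as a Homebrew service); until it is back up, every qemu2 test that needs the socket_vmnet network will fail the same way.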

TestNetworkPlugins/group/kubenet/Start (9.92s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-262000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.914343166s)

-- stdout --
	* [kubenet-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-262000" primary control-plane node in "kubenet-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:53:08.321194   10280 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:53:08.321350   10280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:08.321353   10280 out.go:358] Setting ErrFile to fd 2...
	I0914 23:53:08.321356   10280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:08.321494   10280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:53:08.322557   10280 out.go:352] Setting JSON to false
	I0914 23:53:08.338939   10280 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6757,"bootTime":1726376431,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:53:08.339009   10280 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:53:08.345249   10280 out.go:177] * [kubenet-262000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:53:08.352156   10280 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:53:08.352205   10280 notify.go:220] Checking for updates...
	I0914 23:53:08.360172   10280 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:53:08.369133   10280 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:53:08.380197   10280 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:53:08.384122   10280 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:53:08.387165   10280 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:53:08.392449   10280 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:53:08.392527   10280 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:53:08.392570   10280 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:53:08.397076   10280 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:53:08.405187   10280 start.go:297] selected driver: qemu2
	I0914 23:53:08.405195   10280 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:53:08.405201   10280 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:53:08.407912   10280 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:53:08.412116   10280 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:53:08.415291   10280 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:53:08.415309   10280 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0914 23:53:08.415357   10280 start.go:340] cluster config:
	{Name:kubenet-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:53:08.419360   10280 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:08.427147   10280 out.go:177] * Starting "kubenet-262000" primary control-plane node in "kubenet-262000" cluster
	I0914 23:53:08.431157   10280 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:53:08.431175   10280 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:53:08.431187   10280 cache.go:56] Caching tarball of preloaded images
	I0914 23:53:08.431263   10280 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:53:08.431276   10280 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:53:08.431348   10280 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/kubenet-262000/config.json ...
	I0914 23:53:08.431359   10280 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/kubenet-262000/config.json: {Name:mke16f6268e04993784fdebb9e364352bff96e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:53:08.431588   10280 start.go:360] acquireMachinesLock for kubenet-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:08.431624   10280 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "kubenet-262000"
	I0914 23:53:08.431635   10280 start.go:93] Provisioning new machine with config: &{Name:kubenet-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:53:08.431680   10280 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:53:08.435218   10280 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:53:08.454505   10280 start.go:159] libmachine.API.Create for "kubenet-262000" (driver="qemu2")
	I0914 23:53:08.454533   10280 client.go:168] LocalClient.Create starting
	I0914 23:53:08.454616   10280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:53:08.454650   10280 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:08.454660   10280 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:08.454704   10280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:53:08.454730   10280 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:08.454742   10280 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:08.455124   10280 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:53:08.618935   10280 main.go:141] libmachine: Creating SSH key...
	I0914 23:53:08.751606   10280 main.go:141] libmachine: Creating Disk image...
	I0914 23:53:08.751612   10280 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:53:08.751855   10280 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2
	I0914 23:53:08.761615   10280 main.go:141] libmachine: STDOUT: 
	I0914 23:53:08.761631   10280 main.go:141] libmachine: STDERR: 
	I0914 23:53:08.761681   10280 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2 +20000M
	I0914 23:53:08.769571   10280 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:53:08.769586   10280 main.go:141] libmachine: STDERR: 
	I0914 23:53:08.769604   10280 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2
	I0914 23:53:08.769609   10280 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:53:08.769620   10280 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:08.769646   10280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:77:47:30:df:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2
	I0914 23:53:08.771248   10280 main.go:141] libmachine: STDOUT: 
	I0914 23:53:08.771261   10280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:08.771281   10280 client.go:171] duration metric: took 316.743833ms to LocalClient.Create
	I0914 23:53:10.773481   10280 start.go:128] duration metric: took 2.341794584s to createHost
	I0914 23:53:10.773586   10280 start.go:83] releasing machines lock for "kubenet-262000", held for 2.341979042s
	W0914 23:53:10.773696   10280 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:10.781100   10280 out.go:177] * Deleting "kubenet-262000" in qemu2 ...
	W0914 23:53:10.820032   10280 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:10.820122   10280 start.go:729] Will try again in 5 seconds ...
	I0914 23:53:15.822267   10280 start.go:360] acquireMachinesLock for kubenet-262000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:15.822783   10280 start.go:364] duration metric: took 418.917µs to acquireMachinesLock for "kubenet-262000"
	I0914 23:53:15.822915   10280 start.go:93] Provisioning new machine with config: &{Name:kubenet-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:53:15.823231   10280 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:53:15.841104   10280 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:53:15.891313   10280 start.go:159] libmachine.API.Create for "kubenet-262000" (driver="qemu2")
	I0914 23:53:15.891370   10280 client.go:168] LocalClient.Create starting
	I0914 23:53:15.891496   10280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:53:15.891555   10280 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:15.891571   10280 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:15.891646   10280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:53:15.891689   10280 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:15.891705   10280 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:15.892217   10280 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:53:16.065710   10280 main.go:141] libmachine: Creating SSH key...
	I0914 23:53:16.142576   10280 main.go:141] libmachine: Creating Disk image...
	I0914 23:53:16.142581   10280 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:53:16.142829   10280 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2
	I0914 23:53:16.151983   10280 main.go:141] libmachine: STDOUT: 
	I0914 23:53:16.152002   10280 main.go:141] libmachine: STDERR: 
	I0914 23:53:16.152075   10280 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2 +20000M
	I0914 23:53:16.159896   10280 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:53:16.159911   10280 main.go:141] libmachine: STDERR: 
	I0914 23:53:16.159926   10280 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2
	I0914 23:53:16.159931   10280 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:53:16.159938   10280 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:16.159974   10280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:71:69:51:59:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/kubenet-262000/disk.qcow2
	I0914 23:53:16.161617   10280 main.go:141] libmachine: STDOUT: 
	I0914 23:53:16.161633   10280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:16.161645   10280 client.go:171] duration metric: took 270.272583ms to LocalClient.Create
	I0914 23:53:18.163846   10280 start.go:128] duration metric: took 2.340601458s to createHost
	I0914 23:53:18.163946   10280 start.go:83] releasing machines lock for "kubenet-262000", held for 2.341167459s
	W0914 23:53:18.164293   10280 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:18.178052   10280 out.go:201] 
	W0914 23:53:18.183150   10280 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:53:18.183189   10280 out.go:270] * 
	* 
	W0914 23:53:18.185549   10280 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:53:18.192994   10280 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.92s)
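
Note the fixed shape of each failure above: createHost fails after roughly 2.3 s, the half-created profile is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), retries createHost exactly once, and then exits with GUEST_PROVISION and status 80. A simplified Go paraphrase of that control flow as it appears in these logs (an illustration only, not minikube's actual start.go):

// retryflow.go - simplified paraphrase of the retry behavior visible in
// this report: one fixed 5-second backoff and a single retry. Not the
// real minikube implementation.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for VM creation; with no socket_vmnet daemon
// listening it fails the same way every attempt in this report does.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const profile = "kubenet-262000" // profile name from the test above
	if err := createHost(profile); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(profile); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // matches the asserted "exit status 80"
		}
	}
}

With the daemon down for the whole run, each attempt costs about 2.3 s and the backoff adds 5 s, which is why these Start failures consistently land near the 10-second mark.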

TestStartStop/group/old-k8s-version/serial/FirstStart (10.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.297095667s)

-- stdout --
	* [old-k8s-version-003000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-003000" primary control-plane node in "old-k8s-version-003000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-003000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:53:20.422854   10392 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:53:20.422996   10392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:20.423000   10392 out.go:358] Setting ErrFile to fd 2...
	I0914 23:53:20.423002   10392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:20.423127   10392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:53:20.424177   10392 out.go:352] Setting JSON to false
	I0914 23:53:20.440213   10392 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6769,"bootTime":1726376431,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:53:20.440286   10392 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:53:20.446030   10392 out.go:177] * [old-k8s-version-003000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:53:20.453974   10392 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:53:20.454052   10392 notify.go:220] Checking for updates...
	I0914 23:53:20.462916   10392 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:53:20.465957   10392 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:53:20.467436   10392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:53:20.470927   10392 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:53:20.473935   10392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:53:20.477361   10392 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:53:20.477431   10392 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:53:20.477491   10392 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:53:20.481871   10392 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:53:20.488943   10392 start.go:297] selected driver: qemu2
	I0914 23:53:20.488948   10392 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:53:20.488953   10392 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:53:20.491201   10392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:53:20.493976   10392 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:53:20.496977   10392 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:53:20.496994   10392 cni.go:84] Creating CNI manager for ""
	I0914 23:53:20.497017   10392 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 23:53:20.497039   10392 start.go:340] cluster config:
	{Name:old-k8s-version-003000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:53:20.500802   10392 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:20.508928   10392 out.go:177] * Starting "old-k8s-version-003000" primary control-plane node in "old-k8s-version-003000" cluster
	I0914 23:53:20.512939   10392 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 23:53:20.512955   10392 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 23:53:20.512971   10392 cache.go:56] Caching tarball of preloaded images
	I0914 23:53:20.513046   10392 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:53:20.513053   10392 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0914 23:53:20.513119   10392 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/old-k8s-version-003000/config.json ...
	I0914 23:53:20.513140   10392 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/old-k8s-version-003000/config.json: {Name:mk3b32d1191f347d5cc6b337f0b773d8012ac4de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:53:20.513383   10392 start.go:360] acquireMachinesLock for old-k8s-version-003000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:20.513423   10392 start.go:364] duration metric: took 29.959µs to acquireMachinesLock for "old-k8s-version-003000"
	I0914 23:53:20.513434   10392 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-003000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:53:20.513464   10392 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:53:20.520987   10392 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:53:20.538617   10392 start.go:159] libmachine.API.Create for "old-k8s-version-003000" (driver="qemu2")
	I0914 23:53:20.538648   10392 client.go:168] LocalClient.Create starting
	I0914 23:53:20.538729   10392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:53:20.538779   10392 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:20.538789   10392 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:20.538825   10392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:53:20.538849   10392 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:20.538858   10392 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:20.539273   10392 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:53:20.700896   10392 main.go:141] libmachine: Creating SSH key...
	I0914 23:53:20.983291   10392 main.go:141] libmachine: Creating Disk image...
	I0914 23:53:20.983303   10392 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:53:20.983589   10392 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0914 23:53:20.993511   10392 main.go:141] libmachine: STDOUT: 
	I0914 23:53:20.993612   10392 main.go:141] libmachine: STDERR: 
	I0914 23:53:20.993664   10392 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2 +20000M
	I0914 23:53:21.001566   10392 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:53:21.001619   10392 main.go:141] libmachine: STDERR: 
	I0914 23:53:21.001638   10392 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0914 23:53:21.001642   10392 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:53:21.001659   10392 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:21.001701   10392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:e1:9f:98:d3:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0914 23:53:21.003335   10392 main.go:141] libmachine: STDOUT: 
	I0914 23:53:21.003396   10392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:21.003424   10392 client.go:171] duration metric: took 464.775375ms to LocalClient.Create
	I0914 23:53:23.005574   10392 start.go:128] duration metric: took 2.4921215s to createHost
	I0914 23:53:23.005646   10392 start.go:83] releasing machines lock for "old-k8s-version-003000", held for 2.492242709s
	W0914 23:53:23.005697   10392 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:23.021891   10392 out.go:177] * Deleting "old-k8s-version-003000" in qemu2 ...
	W0914 23:53:23.051806   10392 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:23.051824   10392 start.go:729] Will try again in 5 seconds ...
	I0914 23:53:28.053899   10392 start.go:360] acquireMachinesLock for old-k8s-version-003000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:28.054342   10392 start.go:364] duration metric: took 325.25µs to acquireMachinesLock for "old-k8s-version-003000"
	I0914 23:53:28.054461   10392 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-003000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:53:28.054766   10392 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:53:28.072205   10392 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:53:28.122167   10392 start.go:159] libmachine.API.Create for "old-k8s-version-003000" (driver="qemu2")
	I0914 23:53:28.122213   10392 client.go:168] LocalClient.Create starting
	I0914 23:53:28.122336   10392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:53:28.122399   10392 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:28.122415   10392 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:28.122493   10392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:53:28.122538   10392 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:28.122550   10392 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:28.123162   10392 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:53:28.297681   10392 main.go:141] libmachine: Creating SSH key...
	I0914 23:53:28.628959   10392 main.go:141] libmachine: Creating Disk image...
	I0914 23:53:28.628976   10392 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:53:28.629270   10392 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0914 23:53:28.639385   10392 main.go:141] libmachine: STDOUT: 
	I0914 23:53:28.639403   10392 main.go:141] libmachine: STDERR: 
	I0914 23:53:28.639465   10392 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2 +20000M
	I0914 23:53:28.647585   10392 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:53:28.647611   10392 main.go:141] libmachine: STDERR: 
	I0914 23:53:28.647626   10392 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0914 23:53:28.647633   10392 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:53:28.647640   10392 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:28.647673   10392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:ac:9e:f6:1f:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0914 23:53:28.649343   10392 main.go:141] libmachine: STDOUT: 
	I0914 23:53:28.649356   10392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:28.649370   10392 client.go:171] duration metric: took 527.156459ms to LocalClient.Create
	I0914 23:53:30.651519   10392 start.go:128] duration metric: took 2.596753958s to createHost
	I0914 23:53:30.651579   10392 start.go:83] releasing machines lock for "old-k8s-version-003000", held for 2.597240292s
	W0914 23:53:30.652034   10392 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-003000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-003000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:30.662526   10392 out.go:201] 
	W0914 23:53:30.666609   10392 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:53:30.666636   10392 out.go:270] * 
	* 
	W0914 23:53:30.669198   10392 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:53:30.677555   10392 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (66.7455ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.37s)
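
Note on root cause: every failure in this group reduces to the same host-side problem. The disk-image preparation succeeds (qemu-img converts the raw image to qcow2, then grows it by 20000M), and the start only dies when libmachine launches qemu-system-aarch64 through socket_vmnet_client and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). That points at the socket_vmnet daemon being down on the CI host rather than at minikube itself. The post-mortem status exit code 7 is consistent with this: minikube status encodes host, kubelet, and apiserver state in the exit code's low bits, so 7 means all three are down. A minimal triage sketch for the build host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (paths taken from the log above):

	# Is the daemon socket present at the path the client is dialing?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet service loaded? (service name can vary by install method)
	sudo launchctl list | grep -i socket_vmnet
	# Restart the daemon; it must run as root to use the vmnet framework.
	sudo brew services restart socket_vmnet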

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-003000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-003000 create -f testdata/busybox.yaml: exit status 1 (29.555834ms)

** stderr ** 
	error: context "old-k8s-version-003000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-003000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (31.268167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (30.750417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
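
Note: this failure and the remaining old-k8s-version subtests are cascades of the FirstStart failure, not independent bugs. Because the VM was never created, no kubeconfig context named old-k8s-version-003000 exists, so every kubectl --context invocation fails before it can reach a cluster. A quick check that confirms the cascade from the host (illustrative, not part of the harness):

	kubectl config get-contexts old-k8s-version-003000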

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-003000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-003000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-003000 describe deploy/metrics-server -n kube-system: exit status 1 (26.822667ms)

** stderr ** 
	error: context "old-k8s-version-003000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-003000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (30.963375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.192679417s)

-- stdout --
	* [old-k8s-version-003000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-003000" primary control-plane node in "old-k8s-version-003000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-003000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-003000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:53:34.415994   10440 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:53:34.416137   10440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:34.416141   10440 out.go:358] Setting ErrFile to fd 2...
	I0914 23:53:34.416143   10440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:34.416257   10440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:53:34.417204   10440 out.go:352] Setting JSON to false
	I0914 23:53:34.433620   10440 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6783,"bootTime":1726376431,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:53:34.433694   10440 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:53:34.438206   10440 out.go:177] * [old-k8s-version-003000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:53:34.445178   10440 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:53:34.445250   10440 notify.go:220] Checking for updates...
	I0914 23:53:34.453207   10440 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:53:34.456134   10440 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:53:34.459150   10440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:53:34.462253   10440 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:53:34.465114   10440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:53:34.468518   10440 config.go:182] Loaded profile config "old-k8s-version-003000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0914 23:53:34.472156   10440 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 23:53:34.475126   10440 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:53:34.479119   10440 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:53:34.486114   10440 start.go:297] selected driver: qemu2
	I0914 23:53:34.486119   10440 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-003000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:53:34.486170   10440 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:53:34.488748   10440 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:53:34.488772   10440 cni.go:84] Creating CNI manager for ""
	I0914 23:53:34.488795   10440 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 23:53:34.488815   10440 start.go:340] cluster config:
	{Name:old-k8s-version-003000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:53:34.492426   10440 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:34.501013   10440 out.go:177] * Starting "old-k8s-version-003000" primary control-plane node in "old-k8s-version-003000" cluster
	I0914 23:53:34.505192   10440 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 23:53:34.505207   10440 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 23:53:34.505219   10440 cache.go:56] Caching tarball of preloaded images
	I0914 23:53:34.505284   10440 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:53:34.505290   10440 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0914 23:53:34.505376   10440 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/old-k8s-version-003000/config.json ...
	I0914 23:53:34.505868   10440 start.go:360] acquireMachinesLock for old-k8s-version-003000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:34.505897   10440 start.go:364] duration metric: took 22.792µs to acquireMachinesLock for "old-k8s-version-003000"
	I0914 23:53:34.505905   10440 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:53:34.505911   10440 fix.go:54] fixHost starting: 
	I0914 23:53:34.506030   10440 fix.go:112] recreateIfNeeded on old-k8s-version-003000: state=Stopped err=<nil>
	W0914 23:53:34.506038   10440 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:53:34.510144   10440 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-003000" ...
	I0914 23:53:34.518175   10440 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:34.518213   10440 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:ac:9e:f6:1f:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0914 23:53:34.520149   10440 main.go:141] libmachine: STDOUT: 
	I0914 23:53:34.520169   10440 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:34.520199   10440 fix.go:56] duration metric: took 14.289042ms for fixHost
	I0914 23:53:34.520205   10440 start.go:83] releasing machines lock for "old-k8s-version-003000", held for 14.303792ms
	W0914 23:53:34.520211   10440 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:53:34.520244   10440 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:34.520249   10440 start.go:729] Will try again in 5 seconds ...
	I0914 23:53:39.522456   10440 start.go:360] acquireMachinesLock for old-k8s-version-003000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:39.522963   10440 start.go:364] duration metric: took 387.875µs to acquireMachinesLock for "old-k8s-version-003000"
	I0914 23:53:39.523109   10440 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:53:39.523130   10440 fix.go:54] fixHost starting: 
	I0914 23:53:39.523861   10440 fix.go:112] recreateIfNeeded on old-k8s-version-003000: state=Stopped err=<nil>
	W0914 23:53:39.523888   10440 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:53:39.533312   10440 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-003000" ...
	I0914 23:53:39.537313   10440 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:39.537541   10440 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:ac:9e:f6:1f:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0914 23:53:39.547416   10440 main.go:141] libmachine: STDOUT: 
	I0914 23:53:39.547492   10440 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:39.547591   10440 fix.go:56] duration metric: took 24.461083ms for fixHost
	I0914 23:53:39.547619   10440 start.go:83] releasing machines lock for "old-k8s-version-003000", held for 24.624708ms
	W0914 23:53:39.547889   10440 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-003000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-003000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:39.555369   10440 out.go:201] 
	W0914 23:53:39.559520   10440 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:53:39.559546   10440 out.go:270] * 
	* 
	W0914 23:53:39.561918   10440 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:53:39.567299   10440 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (68.681166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
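
Note: on this second start the profile already exists, so minikube skips create, goes through fixHost, and simply re-executes the saved QEMU command line. Condensed from the log above (the "..." elide the per-profile firmware, ISO, QMP, and pidfile arguments), the launch shape is:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -m 2200 -smp 2 -boot d ... \
	  -device virtio-net-pci,netdev=net0,mac=92:ac:9e:f6:1f:39 \
	  -netdev socket,id=net0,fd=3 -daemonize ...

socket_vmnet_client dials the daemon socket first and hands the connected descriptor to QEMU as fd=3 for the socket netdev, so when the daemon is down the command fails in milliseconds, before QEMU itself ever runs; that is why fixHost completes in ~14-24ms here.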

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-003000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (32.324459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-003000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-003000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-003000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.42275ms)

** stderr ** 
	error: context "old-k8s-version-003000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-003000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (31.015917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
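
Note: the expected image registry.k8s.io/echoserver:1.4 comes from the addon overrides recorded in the profile (see CustomAddonImages MetricsScraper/MetricsServer in the SecondStart config dump above). Against a running cluster this assertion amounts to reading the deployment's container images, roughly (illustrative kubectl, not the harness code):

	kubectl --context old-k8s-version-003000 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'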

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-003000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (30.162041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
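
Note: the want list is the expected image set for Kubernetes v1.20.0, which predates the k8s.gcr.io-to-registry.k8s.io migration and therefore still pins k8s.gcr.io; the got side is empty only because there was no running VM for "image list" to query. With a healthy profile the same data is reproducible as (command taken from the log; the jq filter is illustrative and assumes the repoTags field of the JSON output):

	out/minikube-darwin-arm64 -p old-k8s-version-003000 image list --format=json | jq -r '.[].repoTags[]'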

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-003000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-003000 --alsologtostderr -v=1: exit status 83 (43.431583ms)

-- stdout --
	* The control-plane node old-k8s-version-003000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-003000"

-- /stdout --
** stderr ** 
	I0914 23:53:39.841337   10462 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:53:39.841725   10462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:39.841729   10462 out.go:358] Setting ErrFile to fd 2...
	I0914 23:53:39.841731   10462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:39.841893   10462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:53:39.842080   10462 out.go:352] Setting JSON to false
	I0914 23:53:39.842086   10462 mustload.go:65] Loading cluster: old-k8s-version-003000
	I0914 23:53:39.842300   10462 config.go:182] Loaded profile config "old-k8s-version-003000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0914 23:53:39.847014   10462 out.go:177] * The control-plane node old-k8s-version-003000 host is not running: state=Stopped
	I0914 23:53:39.850870   10462 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-003000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-003000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (30.760708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (31.167791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
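
Note: pause exits 83 rather than 80 here. The 8x range is minikube's guest-state error class (the GUEST_PROVISION failures earlier in this group exited 80), and this particular code pairs with the "host is not running: state=Stopped" message, i.e. the guest profile exists but is stopped. Observable directly (illustrative):

	out/minikube-darwin-arm64 pause -p old-k8s-version-003000; echo "exit: $?"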

TestStartStop/group/no-preload/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.824283334s)

-- stdout --
	* [no-preload-835000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-835000" primary control-plane node in "no-preload-835000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-835000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:53:40.168850   10479 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:53:40.168990   10479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:40.168993   10479 out.go:358] Setting ErrFile to fd 2...
	I0914 23:53:40.168995   10479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:40.169147   10479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:53:40.170190   10479 out.go:352] Setting JSON to false
	I0914 23:53:40.186347   10479 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6789,"bootTime":1726376431,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:53:40.186418   10479 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:53:40.190998   10479 out.go:177] * [no-preload-835000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:53:40.198986   10479 notify.go:220] Checking for updates...
	I0914 23:53:40.201959   10479 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:53:40.209915   10479 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:53:40.213798   10479 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:53:40.216909   10479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:53:40.219948   10479 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:53:40.222980   10479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:53:40.227301   10479 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:53:40.227366   10479 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:53:40.227414   10479 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:53:40.230955   10479 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:53:40.235963   10479 start.go:297] selected driver: qemu2
	I0914 23:53:40.235969   10479 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:53:40.235974   10479 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:53:40.238280   10479 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:53:40.241935   10479 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:53:40.243094   10479 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:53:40.243109   10479 cni.go:84] Creating CNI manager for ""
	I0914 23:53:40.243135   10479 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:53:40.243143   10479 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:53:40.243164   10479 start.go:340] cluster config:
	{Name:no-preload-835000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:53:40.246743   10479 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:40.255005   10479 out.go:177] * Starting "no-preload-835000" primary control-plane node in "no-preload-835000" cluster
	I0914 23:53:40.258927   10479 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:53:40.258999   10479 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/no-preload-835000/config.json ...
	I0914 23:53:40.259027   10479 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/no-preload-835000/config.json: {Name:mk314f37ab0195af54088f3dbbdad568426d316b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:53:40.259027   10479 cache.go:107] acquiring lock: {Name:mk514f94bfdd47feb2d2a83a732e5d28cc5e1120 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:40.259043   10479 cache.go:107] acquiring lock: {Name:mk319b169cb0c436b253da41c17aa46a35c8ca88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:40.259037   10479 cache.go:107] acquiring lock: {Name:mk33fefe7f0a407a90d917e46d8804946985e905 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:40.259095   10479 cache.go:115] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 23:53:40.259101   10479 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 80.583µs
	I0914 23:53:40.259108   10479 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 23:53:40.259115   10479 cache.go:107] acquiring lock: {Name:mk308e4680d821aba51a0b773a6b9d963cae433e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:40.259219   10479 cache.go:107] acquiring lock: {Name:mk9d2f38e48cbce845262f4d45b0aa842200f0db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:40.259259   10479 cache.go:107] acquiring lock: {Name:mkea9d31fb237568bde5423344ff8542cc1b9f7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:40.259248   10479 cache.go:107] acquiring lock: {Name:mk610d37ab4343ee1fb19269d918f31f6526072f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:40.259271   10479 cache.go:107] acquiring lock: {Name:mkc0fdd2b6e0e821ddd6cdc40f809eb72ceca98d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:40.259245   10479 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 23:53:40.259354   10479 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 23:53:40.259428   10479 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 23:53:40.259485   10479 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 23:53:40.259539   10479 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 23:53:40.259556   10479 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 23:53:40.259600   10479 start.go:360] acquireMachinesLock for no-preload-835000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:40.259637   10479 start.go:364] duration metric: took 30.167µs to acquireMachinesLock for "no-preload-835000"
	I0914 23:53:40.259650   10479 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 23:53:40.259649   10479 start.go:93] Provisioning new machine with config: &{Name:no-preload-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:53:40.259688   10479 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:53:40.267928   10479 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:53:40.272532   10479 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 23:53:40.273467   10479 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 23:53:40.273575   10479 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 23:53:40.273598   10479 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 23:53:40.273689   10479 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 23:53:40.273776   10479 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 23:53:40.273975   10479 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 23:53:40.286419   10479 start.go:159] libmachine.API.Create for "no-preload-835000" (driver="qemu2")
	I0914 23:53:40.286444   10479 client.go:168] LocalClient.Create starting
	I0914 23:53:40.286544   10479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:53:40.286580   10479 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:40.286593   10479 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:40.286642   10479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:53:40.286667   10479 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:40.286677   10479 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:40.287097   10479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:53:40.454391   10479 main.go:141] libmachine: Creating SSH key...
	I0914 23:53:40.483510   10479 main.go:141] libmachine: Creating Disk image...
	I0914 23:53:40.483528   10479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:53:40.483789   10479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 23:53:40.493813   10479 main.go:141] libmachine: STDOUT: 
	I0914 23:53:40.493848   10479 main.go:141] libmachine: STDERR: 
	I0914 23:53:40.493900   10479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2 +20000M
	I0914 23:53:40.502745   10479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:53:40.502786   10479 main.go:141] libmachine: STDERR: 
	I0914 23:53:40.502811   10479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 23:53:40.502817   10479 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:53:40.502831   10479 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:40.502863   10479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:19:e3:cb:a4:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 23:53:40.505059   10479 main.go:141] libmachine: STDOUT: 
	I0914 23:53:40.505078   10479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:40.505096   10479 client.go:171] duration metric: took 218.650708ms to LocalClient.Create
	I0914 23:53:40.693617   10479 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0914 23:53:40.693617   10479 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0914 23:53:40.706048   10479 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 23:53:40.711365   10479 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 23:53:40.734661   10479 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 23:53:40.741138   10479 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 23:53:40.745651   10479 cache.go:162] opening:  /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 23:53:40.812855   10479 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0914 23:53:40.812900   10479 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 553.746083ms
	I0914 23:53:40.812924   10479 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0914 23:53:42.505350   10479 start.go:128] duration metric: took 2.245654042s to createHost
	I0914 23:53:42.505409   10479 start.go:83] releasing machines lock for "no-preload-835000", held for 2.245788792s
	W0914 23:53:42.505474   10479 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:42.526993   10479 out.go:177] * Deleting "no-preload-835000" in qemu2 ...
	W0914 23:53:42.561679   10479 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:42.561708   10479 start.go:729] Will try again in 5 seconds ...
	I0914 23:53:43.970177   10479 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0914 23:53:43.970225   10479 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.711071625s
	I0914 23:53:43.970250   10479 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0914 23:53:44.277771   10479 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0914 23:53:44.277826   10479 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.018846084s
	I0914 23:53:44.277857   10479 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0914 23:53:44.370038   10479 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0914 23:53:44.370084   10479 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.110898s
	I0914 23:53:44.370125   10479 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0914 23:53:45.108317   10479 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0914 23:53:45.108365   10479 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.849171875s
	I0914 23:53:45.108412   10479 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0914 23:53:45.216600   10479 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0914 23:53:45.216649   10479 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.957677834s
	I0914 23:53:45.216693   10479 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0914 23:53:47.561820   10479 start.go:360] acquireMachinesLock for no-preload-835000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:47.562278   10479 start.go:364] duration metric: took 382.667µs to acquireMachinesLock for "no-preload-835000"
	I0914 23:53:47.562403   10479 start.go:93] Provisioning new machine with config: &{Name:no-preload-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:53:47.562616   10479 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:53:47.572298   10479 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:53:47.624106   10479 start.go:159] libmachine.API.Create for "no-preload-835000" (driver="qemu2")
	I0914 23:53:47.624154   10479 client.go:168] LocalClient.Create starting
	I0914 23:53:47.624268   10479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:53:47.624331   10479 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:47.624354   10479 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:47.624426   10479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:53:47.624474   10479 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:47.624491   10479 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:47.625009   10479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:53:47.801444   10479 main.go:141] libmachine: Creating SSH key...
	I0914 23:53:47.897380   10479 main.go:141] libmachine: Creating Disk image...
	I0914 23:53:47.897385   10479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:53:47.897635   10479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 23:53:47.907129   10479 main.go:141] libmachine: STDOUT: 
	I0914 23:53:47.907156   10479 main.go:141] libmachine: STDERR: 
	I0914 23:53:47.907215   10479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2 +20000M
	I0914 23:53:47.915286   10479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:53:47.915313   10479 main.go:141] libmachine: STDERR: 
	I0914 23:53:47.915330   10479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 23:53:47.915337   10479 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:53:47.915349   10479 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:47.915385   10479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:87:f3:7e:8b:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 23:53:47.917053   10479 main.go:141] libmachine: STDOUT: 
	I0914 23:53:47.917067   10479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:47.917086   10479 client.go:171] duration metric: took 292.927875ms to LocalClient.Create
	I0914 23:53:48.616675   10479 cache.go:157] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0914 23:53:48.616737   10479 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.35771525s
	I0914 23:53:48.616760   10479 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0914 23:53:48.616801   10479 cache.go:87] Successfully saved all images to host disk.
	I0914 23:53:49.919284   10479 start.go:128] duration metric: took 2.356633792s to createHost
	I0914 23:53:49.919327   10479 start.go:83] releasing machines lock for "no-preload-835000", held for 2.357051834s
	W0914 23:53:49.919688   10479 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-835000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-835000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:49.930257   10479 out.go:201] 
	W0914 23:53:49.936465   10479 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:53:49.936491   10479 out.go:270] * 
	* 
	W0914 23:53:49.939082   10479 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:53:49.949336   10479 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
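The exit status 80 above is environmental rather than a Kubernetes regression: every qemu-system-aarch64 launch in the log is wrapped by socket_vmnet_client, and each attempt dies immediately with Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning no socket_vmnet daemon is listening on the CI host. A minimal Go sketch of the same reachability probe (illustrative only, not part of the test suite; the socket path is taken from SocketVMnetPath in the config dump above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client hands to QEMU as fd 3.
	// Path matches SocketVMnetPath:/var/run/socket_vmnet in the config above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this host the daemon is down, so this prints a
		// "connection refused" error, matching the log.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}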
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (66.858417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.89s)
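Note that the image side of this no-preload run still succeeds: the cache.go lines above download all eight v1.31.1 images and save them to the host cache before the test gives up on the VM. The on-disk layout visible in those lines maps an image reference to a tar path by replacing the tag separator; a sketch of that mapping, as an assumed helper rather than minikube's actual cache.go code:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// cachePath mirrors the layout seen in the cache.go log lines: the image tag
// separator ':' becomes '_' under <minikube home>/cache/images/<arch>/.
// Assumed helper for illustration, not minikube's implementation.
func cachePath(minikubeHome, arch, image string) string {
	return filepath.Join(minikubeHome, "cache", "images", arch,
		strings.ReplaceAll(image, ":", "_"))
}

func main() {
	// Prints .../.minikube/cache/images/arm64/registry.k8s.io/pause_3.10,
	// the exact path reported as "exists" in the log.
	fmt.Println(cachePath(
		"/Users/jenkins/minikube-integration/19644-6577/.minikube",
		"arm64", "registry.k8s.io/pause:3.10"))
}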

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-835000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-835000 create -f testdata/busybox.yaml: exit status 1 (30.072625ms)

** stderr ** 
	error: context "no-preload-835000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-835000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (31.2655ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (30.612958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
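This failure, and the kubectl-driven checks that follow, are all downstream of FirstStart: because the cluster never came up, no "no-preload-835000" context was ever written to the kubeconfig, so kubectl rejects the --context flag before contacting any server. A hypothetical pre-check using k8s.io/client-go shows the state these tests run in (path taken from the KUBECONFIG value logged in these runs):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(
		"/Users/jenkins/minikube-integration/19644-6577/kubeconfig")
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	// With FirstStart failed, no context was created, so this branch fires,
	// matching the kubectl error in the output above.
	if _, ok := cfg.Contexts["no-preload-835000"]; !ok {
		fmt.Println(`context "no-preload-835000" does not exist`)
	}
}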

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-835000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-835000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-835000 describe deploy/metrics-server -n kube-system: exit status 1 (27.250083ms)

** stderr ** 
	error: context "no-preload-835000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-835000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (31.225708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
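The expected string at start_stop_delete_test.go:221 is assembled from the two flags passed to "addons enable": the --registries override is prefixed onto the --images override for the MetricsServer component. A one-line sketch of that composition (an illustrative assumption based on the flag values above, not the test's actual code):

package main

import "fmt"

func main() {
	// Values from the flags in the command above.
	registry := "fake.domain"                 // --registries=MetricsServer=fake.domain
	image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=registry.k8s.io/echoserver:1.4
	// The deployment is expected to reference the composed image reference.
	fmt.Println(registry + "/" + image) // fake.domain/registry.k8s.io/echoserver:1.4
}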

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.18264225s)

-- stdout --
	* [no-preload-835000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-835000" primary control-plane node in "no-preload-835000" cluster
	* Restarting existing qemu2 VM for "no-preload-835000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-835000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:53:53.896235   10559 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:53:53.896381   10559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:53.896384   10559 out.go:358] Setting ErrFile to fd 2...
	I0914 23:53:53.896387   10559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:53.896513   10559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:53:53.897532   10559 out.go:352] Setting JSON to false
	I0914 23:53:53.913646   10559 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6802,"bootTime":1726376431,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:53:53.913720   10559 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:53:53.919024   10559 out.go:177] * [no-preload-835000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:53:53.926025   10559 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:53:53.926058   10559 notify.go:220] Checking for updates...
	I0914 23:53:53.933846   10559 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:53:53.937012   10559 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:53:53.940014   10559 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:53:53.943019   10559 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:53:53.946092   10559 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:53:53.949264   10559 config.go:182] Loaded profile config "no-preload-835000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:53:53.949528   10559 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:53:53.953049   10559 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:53:53.959965   10559 start.go:297] selected driver: qemu2
	I0914 23:53:53.959976   10559 start.go:901] validating driver "qemu2" against &{Name:no-preload-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:53:53.960027   10559 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:53:53.962420   10559 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:53:53.962447   10559 cni.go:84] Creating CNI manager for ""
	I0914 23:53:53.962471   10559 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:53:53.962502   10559 start.go:340] cluster config:
	{Name:no-preload-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:53:53.966160   10559 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:53.974956   10559 out.go:177] * Starting "no-preload-835000" primary control-plane node in "no-preload-835000" cluster
	I0914 23:53:53.979023   10559 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:53:53.979113   10559 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/no-preload-835000/config.json ...
	I0914 23:53:53.979140   10559 cache.go:107] acquiring lock: {Name:mk514f94bfdd47feb2d2a83a732e5d28cc5e1120 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:53.979140   10559 cache.go:107] acquiring lock: {Name:mk319b169cb0c436b253da41c17aa46a35c8ca88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:53.979147   10559 cache.go:107] acquiring lock: {Name:mk610d37ab4343ee1fb19269d918f31f6526072f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:53.979200   10559 cache.go:115] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 23:53:53.979211   10559 cache.go:115] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0914 23:53:53.979214   10559 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 87.625µs
	I0914 23:53:53.979221   10559 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 92.959µs
	I0914 23:53:53.979227   10559 cache.go:115] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0914 23:53:53.979234   10559 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 96.625µs
	I0914 23:53:53.979239   10559 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0914 23:53:53.979230   10559 cache.go:107] acquiring lock: {Name:mk308e4680d821aba51a0b773a6b9d963cae433e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:53.979234   10559 cache.go:107] acquiring lock: {Name:mkc0fdd2b6e0e821ddd6cdc40f809eb72ceca98d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:53.979249   10559 cache.go:107] acquiring lock: {Name:mkea9d31fb237568bde5423344ff8542cc1b9f7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:53.979227   10559 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0914 23:53:53.979225   10559 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 23:53:53.979283   10559 cache.go:115] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0914 23:53:53.979285   10559 cache.go:115] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0914 23:53:53.979290   10559 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 60.333µs
	I0914 23:53:53.979298   10559 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0914 23:53:53.979291   10559 cache.go:115] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0914 23:53:53.979302   10559 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 61.959µs
	I0914 23:53:53.979307   10559 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0914 23:53:53.979292   10559 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 58.5µs
	I0914 23:53:53.979316   10559 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0914 23:53:53.979271   10559 cache.go:107] acquiring lock: {Name:mk9d2f38e48cbce845262f4d45b0aa842200f0db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:53.979321   10559 cache.go:107] acquiring lock: {Name:mk33fefe7f0a407a90d917e46d8804946985e905 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:53.979364   10559 cache.go:115] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0914 23:53:53.979369   10559 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 120.166µs
	I0914 23:53:53.979374   10559 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0914 23:53:53.979370   10559 cache.go:115] /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0914 23:53:53.979378   10559 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 153.625µs
	I0914 23:53:53.979383   10559 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0914 23:53:53.979388   10559 cache.go:87] Successfully saved all images to host disk.
	I0914 23:53:53.979591   10559 start.go:360] acquireMachinesLock for no-preload-835000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:53.979619   10559 start.go:364] duration metric: took 22.417µs to acquireMachinesLock for "no-preload-835000"
	I0914 23:53:53.979627   10559 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:53:53.979631   10559 fix.go:54] fixHost starting: 
	I0914 23:53:53.979755   10559 fix.go:112] recreateIfNeeded on no-preload-835000: state=Stopped err=<nil>
	W0914 23:53:53.979765   10559 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:53:53.988030   10559 out.go:177] * Restarting existing qemu2 VM for "no-preload-835000" ...
	I0914 23:53:53.992007   10559 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:53.992041   10559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:87:f3:7e:8b:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 23:53:53.994155   10559 main.go:141] libmachine: STDOUT: 
	I0914 23:53:53.994178   10559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:53.994211   10559 fix.go:56] duration metric: took 14.577208ms for fixHost
	I0914 23:53:53.994216   10559 start.go:83] releasing machines lock for "no-preload-835000", held for 14.592917ms
	W0914 23:53:53.994224   10559 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:53:53.994259   10559 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:53.994264   10559 start.go:729] Will try again in 5 seconds ...
	I0914 23:53:58.996388   10559 start.go:360] acquireMachinesLock for no-preload-835000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:58.996818   10559 start.go:364] duration metric: took 340.084µs to acquireMachinesLock for "no-preload-835000"
	I0914 23:53:58.997006   10559 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:53:58.997024   10559 fix.go:54] fixHost starting: 
	I0914 23:53:58.997767   10559 fix.go:112] recreateIfNeeded on no-preload-835000: state=Stopped err=<nil>
	W0914 23:53:58.997791   10559 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:53:59.000031   10559 out.go:177] * Restarting existing qemu2 VM for "no-preload-835000" ...
	I0914 23:53:59.007174   10559 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:59.007395   10559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:87:f3:7e:8b:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 23:53:59.016069   10559 main.go:141] libmachine: STDOUT: 
	I0914 23:53:59.016129   10559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:59.016195   10559 fix.go:56] duration metric: took 19.169708ms for fixHost
	I0914 23:53:59.016215   10559 start.go:83] releasing machines lock for "no-preload-835000", held for 19.353042ms
	W0914 23:53:59.016366   10559 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-835000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-835000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:53:59.023172   10559 out.go:201] 
	W0914 23:53:59.026341   10559 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:53:59.026369   10559 out.go:270] * 
	* 
	W0914 23:53:59.028983   10559 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:53:59.040073   10559 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (68.701334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
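SecondStart reuses the existing machine configuration ("Skipping create...Using existing machine configuration"), so the error text changes from "creating host" to "driver start", but the recovery path in the log is identical to FirstStart: one failed host start, a warning, a fixed five-second back-off, and exactly one retry before exiting with GUEST_PROVISION. In outline (a sketch of the observed behavior, not minikube's actual start.go):

package main

import (
	"errors"
	"log"
	"time"
)

// startHost stands in for minikube's host start; on this CI host it always
// fails because nothing is listening on /var/run/socket_vmnet.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		log.Printf("! StartHost failed, but will try again: %v", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			log.Fatalf("X Exiting due to GUEST_PROVISION: %v", err)
		}
	}
}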

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-835000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (33.416583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-835000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-835000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-835000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.483375ms)

** stderr ** 
	error: context "no-preload-835000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-835000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (30.801875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-835000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (30.910125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
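
The (-want +got) diff above is go-cmp output: every expected v1.31.1 image sits on the "-" (want) side and nothing appears on the "+" (got) side, meaning `image list` reported no images at all, consistent with a VM that was never provisioned. A minimal sketch to re-run the check by hand, reusing the exact command from the log (binary path and profile name as in this run):

	# Lists the images minikube tracks for the profile; in this run it returned none.
	out/minikube-darwin-arm64 -p no-preload-835000 image list --format=json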

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-835000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-835000 --alsologtostderr -v=1: exit status 83 (42.448917ms)

-- stdout --
	* The control-plane node no-preload-835000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-835000"

-- /stdout --
** stderr ** 
	I0914 23:53:59.312935   10578 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:53:59.313102   10578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:59.313105   10578 out.go:358] Setting ErrFile to fd 2...
	I0914 23:53:59.313108   10578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:59.313248   10578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:53:59.313481   10578 out.go:352] Setting JSON to false
	I0914 23:53:59.313486   10578 mustload.go:65] Loading cluster: no-preload-835000
	I0914 23:53:59.313706   10578 config.go:182] Loaded profile config "no-preload-835000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:53:59.317315   10578 out.go:177] * The control-plane node no-preload-835000 host is not running: state=Stopped
	I0914 23:53:59.321142   10578 out.go:177]   To start a cluster, run: "minikube start -p no-preload-835000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-835000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (30.578083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (30.590042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.861829542s)

-- stdout --
	* [embed-certs-185000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-185000" primary control-plane node in "embed-certs-185000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-185000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:53:59.635750   10595 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:53:59.635878   10595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:59.635881   10595 out.go:358] Setting ErrFile to fd 2...
	I0914 23:53:59.635884   10595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:53:59.636011   10595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:53:59.637102   10595 out.go:352] Setting JSON to false
	I0914 23:53:59.653031   10595 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6808,"bootTime":1726376431,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:53:59.653103   10595 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:53:59.658170   10595 out.go:177] * [embed-certs-185000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:53:59.664161   10595 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:53:59.664213   10595 notify.go:220] Checking for updates...
	I0914 23:53:59.672127   10595 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:53:59.675059   10595 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:53:59.678138   10595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:53:59.681237   10595 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:53:59.684078   10595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:53:59.687361   10595 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:53:59.687425   10595 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:53:59.687476   10595 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:53:59.692146   10595 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:53:59.699155   10595 start.go:297] selected driver: qemu2
	I0914 23:53:59.699163   10595 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:53:59.699171   10595 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:53:59.701413   10595 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:53:59.704125   10595 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:53:59.707170   10595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:53:59.707194   10595 cni.go:84] Creating CNI manager for ""
	I0914 23:53:59.707229   10595 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:53:59.707236   10595 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:53:59.707268   10595 start.go:340] cluster config:
	{Name:embed-certs-185000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:53:59.711001   10595 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:53:59.720107   10595 out.go:177] * Starting "embed-certs-185000" primary control-plane node in "embed-certs-185000" cluster
	I0914 23:53:59.724126   10595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:53:59.724141   10595 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:53:59.724156   10595 cache.go:56] Caching tarball of preloaded images
	I0914 23:53:59.724241   10595 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:53:59.724248   10595 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:53:59.724328   10595 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/embed-certs-185000/config.json ...
	I0914 23:53:59.724339   10595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/embed-certs-185000/config.json: {Name:mk9204f0fb050e802f0ea1f83b8081b22066b4ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:53:59.724563   10595 start.go:360] acquireMachinesLock for embed-certs-185000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:53:59.724598   10595 start.go:364] duration metric: took 29.458µs to acquireMachinesLock for "embed-certs-185000"
	I0914 23:53:59.724610   10595 start.go:93] Provisioning new machine with config: &{Name:embed-certs-185000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:53:59.724649   10595 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:53:59.732072   10595 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:53:59.750644   10595 start.go:159] libmachine.API.Create for "embed-certs-185000" (driver="qemu2")
	I0914 23:53:59.750693   10595 client.go:168] LocalClient.Create starting
	I0914 23:53:59.750759   10595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:53:59.750791   10595 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:59.750800   10595 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:59.750834   10595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:53:59.750858   10595 main.go:141] libmachine: Decoding PEM data...
	I0914 23:53:59.750871   10595 main.go:141] libmachine: Parsing certificate...
	I0914 23:53:59.751236   10595 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:53:59.912888   10595 main.go:141] libmachine: Creating SSH key...
	I0914 23:53:59.952284   10595 main.go:141] libmachine: Creating Disk image...
	I0914 23:53:59.952289   10595 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:53:59.952539   10595 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2
	I0914 23:53:59.961799   10595 main.go:141] libmachine: STDOUT: 
	I0914 23:53:59.961824   10595 main.go:141] libmachine: STDERR: 
	I0914 23:53:59.961894   10595 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2 +20000M
	I0914 23:53:59.969772   10595 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:53:59.969862   10595 main.go:141] libmachine: STDERR: 
	I0914 23:53:59.969875   10595 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2
	I0914 23:53:59.969880   10595 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:53:59.969891   10595 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:53:59.969922   10595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:06:a5:e2:84:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2
	I0914 23:53:59.971538   10595 main.go:141] libmachine: STDOUT: 
	I0914 23:53:59.971552   10595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:53:59.971572   10595 client.go:171] duration metric: took 220.874208ms to LocalClient.Create
	I0914 23:54:01.973761   10595 start.go:128] duration metric: took 2.2491155s to createHost
	I0914 23:54:01.973849   10595 start.go:83] releasing machines lock for "embed-certs-185000", held for 2.249267125s
	W0914 23:54:01.973902   10595 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:01.985303   10595 out.go:177] * Deleting "embed-certs-185000" in qemu2 ...
	W0914 23:54:02.015245   10595 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:02.015271   10595 start.go:729] Will try again in 5 seconds ...
	I0914 23:54:07.017383   10595 start.go:360] acquireMachinesLock for embed-certs-185000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:54:07.017831   10595 start.go:364] duration metric: took 364.542µs to acquireMachinesLock for "embed-certs-185000"
	I0914 23:54:07.018002   10595 start.go:93] Provisioning new machine with config: &{Name:embed-certs-185000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:54:07.018292   10595 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:54:07.026744   10595 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:54:07.078692   10595 start.go:159] libmachine.API.Create for "embed-certs-185000" (driver="qemu2")
	I0914 23:54:07.078746   10595 client.go:168] LocalClient.Create starting
	I0914 23:54:07.078869   10595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:54:07.078946   10595 main.go:141] libmachine: Decoding PEM data...
	I0914 23:54:07.078966   10595 main.go:141] libmachine: Parsing certificate...
	I0914 23:54:07.079023   10595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:54:07.079067   10595 main.go:141] libmachine: Decoding PEM data...
	I0914 23:54:07.079081   10595 main.go:141] libmachine: Parsing certificate...
	I0914 23:54:07.079625   10595 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:54:07.256369   10595 main.go:141] libmachine: Creating SSH key...
	I0914 23:54:07.405701   10595 main.go:141] libmachine: Creating Disk image...
	I0914 23:54:07.405707   10595 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:54:07.405984   10595 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2
	I0914 23:54:07.415938   10595 main.go:141] libmachine: STDOUT: 
	I0914 23:54:07.415954   10595 main.go:141] libmachine: STDERR: 
	I0914 23:54:07.416020   10595 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2 +20000M
	I0914 23:54:07.423976   10595 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:54:07.423993   10595 main.go:141] libmachine: STDERR: 
	I0914 23:54:07.424007   10595 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2
	I0914 23:54:07.424012   10595 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:54:07.424020   10595 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:54:07.424059   10595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:27:06:d6:0a:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2
	I0914 23:54:07.425760   10595 main.go:141] libmachine: STDOUT: 
	I0914 23:54:07.425773   10595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:54:07.425785   10595 client.go:171] duration metric: took 347.03675ms to LocalClient.Create
	I0914 23:54:09.427929   10595 start.go:128] duration metric: took 2.40963725s to createHost
	I0914 23:54:09.428004   10595 start.go:83] releasing machines lock for "embed-certs-185000", held for 2.410136709s
	W0914 23:54:09.428432   10595 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-185000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-185000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:09.437959   10595 out.go:201] 
	W0914 23:54:09.445093   10595 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:54:09.445116   10595 out.go:270] * 
	* 
	W0914 23:54:09.447797   10595 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:54:09.455023   10595 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (69.196209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.93s)
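
Every start attempt in this group fails at the same step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. A minimal diagnostic sketch for the build host, with paths taken from the log; the Homebrew service name is an assumption about how socket_vmnet was installed on this agent:

	# Is the socket present, and is the daemon that owns it alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Assumption: socket_vmnet is Homebrew-managed; restart it as root (vmnet requires privileges).
	sudo brew services restart socket_vmnet

While the daemon is down, every qemu2 test that selects the socket_vmnet network will fail this way regardless of what the test body does.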

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-185000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-185000 create -f testdata/busybox.yaml: exit status 1 (29.605292ms)

** stderr ** 
	error: context "embed-certs-185000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-185000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (31.103917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (30.138958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-185000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-185000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-185000 describe deploy/metrics-server -n kube-system: exit status 1 (27.626875ms)

** stderr ** 
	error: context "embed-certs-185000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-185000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (30.142416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
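
Note that the `addons enable metrics-server` step itself is logged without a Non-zero exit; only the kubectl verification fails, because a profile whose VM never started gets no kubeconfig context. A generic kubectl sketch (not taken from the log) to see which contexts actually exist:

	# Shows the contexts in the kubeconfig; a never-started profile will be absent.
	kubectl config get-contexts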

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.191238625s)

-- stdout --
	* [embed-certs-185000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-185000" primary control-plane node in "embed-certs-185000" cluster
	* Restarting existing qemu2 VM for "embed-certs-185000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-185000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:54:13.797200   10645 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:54:13.797344   10645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:13.797347   10645 out.go:358] Setting ErrFile to fd 2...
	I0914 23:54:13.797350   10645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:13.797478   10645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:54:13.798419   10645 out.go:352] Setting JSON to false
	I0914 23:54:13.814454   10645 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6822,"bootTime":1726376431,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:54:13.814527   10645 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:54:13.819955   10645 out.go:177] * [embed-certs-185000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:54:13.826920   10645 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:54:13.826977   10645 notify.go:220] Checking for updates...
	I0914 23:54:13.832933   10645 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:54:13.835894   10645 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:54:13.838899   10645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:54:13.841906   10645 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:54:13.844903   10645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:54:13.848228   10645 config.go:182] Loaded profile config "embed-certs-185000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:54:13.848482   10645 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:54:13.851867   10645 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:54:13.866896   10645 start.go:297] selected driver: qemu2
	I0914 23:54:13.866904   10645 start.go:901] validating driver "qemu2" against &{Name:embed-certs-185000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:54:13.866994   10645 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:54:13.869428   10645 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:54:13.869454   10645 cni.go:84] Creating CNI manager for ""
	I0914 23:54:13.869474   10645 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:54:13.869498   10645 start.go:340] cluster config:
	{Name:embed-certs-185000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:54:13.873256   10645 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:54:13.881916   10645 out.go:177] * Starting "embed-certs-185000" primary control-plane node in "embed-certs-185000" cluster
	I0914 23:54:13.884900   10645 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:54:13.884916   10645 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:54:13.884930   10645 cache.go:56] Caching tarball of preloaded images
	I0914 23:54:13.885000   10645 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:54:13.885006   10645 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:54:13.885081   10645 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/embed-certs-185000/config.json ...
	I0914 23:54:13.885585   10645 start.go:360] acquireMachinesLock for embed-certs-185000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:54:13.885621   10645 start.go:364] duration metric: took 29.791µs to acquireMachinesLock for "embed-certs-185000"
	I0914 23:54:13.885630   10645 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:54:13.885637   10645 fix.go:54] fixHost starting: 
	I0914 23:54:13.885768   10645 fix.go:112] recreateIfNeeded on embed-certs-185000: state=Stopped err=<nil>
	W0914 23:54:13.885777   10645 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:54:13.889023   10645 out.go:177] * Restarting existing qemu2 VM for "embed-certs-185000" ...
	I0914 23:54:13.895925   10645 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:54:13.895971   10645 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:27:06:d6:0a:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2
	I0914 23:54:13.898009   10645 main.go:141] libmachine: STDOUT: 
	I0914 23:54:13.898028   10645 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:54:13.898060   10645 fix.go:56] duration metric: took 12.423958ms for fixHost
	I0914 23:54:13.898064   10645 start.go:83] releasing machines lock for "embed-certs-185000", held for 12.438625ms
	W0914 23:54:13.898069   10645 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:54:13.898104   10645 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:13.898110   10645 start.go:729] Will try again in 5 seconds ...
	I0914 23:54:18.900265   10645 start.go:360] acquireMachinesLock for embed-certs-185000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:54:18.900782   10645 start.go:364] duration metric: took 382.125µs to acquireMachinesLock for "embed-certs-185000"
	I0914 23:54:18.900935   10645 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:54:18.900955   10645 fix.go:54] fixHost starting: 
	I0914 23:54:18.901665   10645 fix.go:112] recreateIfNeeded on embed-certs-185000: state=Stopped err=<nil>
	W0914 23:54:18.901691   10645 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:54:18.910058   10645 out.go:177] * Restarting existing qemu2 VM for "embed-certs-185000" ...
	I0914 23:54:18.913970   10645 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:54:18.914210   10645 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:27:06:d6:0a:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/embed-certs-185000/disk.qcow2
	I0914 23:54:18.923181   10645 main.go:141] libmachine: STDOUT: 
	I0914 23:54:18.923229   10645 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:54:18.923309   10645 fix.go:56] duration metric: took 22.353541ms for fixHost
	I0914 23:54:18.923322   10645 start.go:83] releasing machines lock for "embed-certs-185000", held for 22.487083ms
	W0914 23:54:18.923495   10645 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-185000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-185000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:18.930951   10645 out.go:201] 
	W0914 23:54:18.935093   10645 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:54:18.935126   10645 out.go:270] * 
	* 
	W0914 23:54:18.937684   10645 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:54:18.946029   10645 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (69.056583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
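Every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and the restart path (fixHost -> recreateIfNeeded -> restart) gives up. A minimal diagnostic sketch on the affected host; the socket and client paths are taken verbatim from the log above, while the launchd query is an assumption based on socket_vmnet's documented daemon setup:

	# Does the socket exist, and is the daemon that serves it loaded?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet

If the daemon is down, every "Connection refused" in the rest of this report follows from that single condition.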

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-185000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (32.699291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
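This failure is purely downstream of SecondStart: the VM never came up, so no kubeconfig context named "embed-certs-185000" was written, and the client config lookup fails before any dashboard pod can be polled. A quick confirmation sketch, assuming the same KUBECONFIG the test run used:

	kubectl config get-contexts   # "embed-certs-185000" should be absent from this list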

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-185000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-185000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-185000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.889708ms)

** stderr ** 
	error: context "embed-certs-185000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-185000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (29.677625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
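For reference, the check that failed here looks for an expected image string in the dashboard-metrics-scraper deployment. On a healthy profile the same information can be read without parsing `describe` text; a sketch (the context name is only meaningful once the cluster actually exists):

	kubectl --context embed-certs-185000 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'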

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-185000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (29.495042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
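The -want list above is the standard v1.31.1 control-plane image set. Because the host is stopped, `image list` returns nothing, so every expected entry shows as missing; nothing here indicates an image is actually absent from the cache. On a running profile the comparison can be eyeballed directly; a sketch:

	out/minikube-darwin-arm64 -p embed-certs-185000 image list --format=table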

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-185000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-185000 --alsologtostderr -v=1: exit status 83 (43.858209ms)

-- stdout --
	* The control-plane node embed-certs-185000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-185000"

-- /stdout --
** stderr ** 
	I0914 23:54:19.215399   10671 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:54:19.215563   10671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:19.215566   10671 out.go:358] Setting ErrFile to fd 2...
	I0914 23:54:19.215568   10671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:19.215690   10671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:54:19.215918   10671 out.go:352] Setting JSON to false
	I0914 23:54:19.215928   10671 mustload.go:65] Loading cluster: embed-certs-185000
	I0914 23:54:19.216143   10671 config.go:182] Loaded profile config "embed-certs-185000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:54:19.220372   10671 out.go:177] * The control-plane node embed-certs-185000 host is not running: state=Stopped
	I0914 23:54:19.228404   10671 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-185000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-185000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (29.42025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (28.326208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
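`pause` exits 83 here because the mustload check visible in the stderr above detects the stopped host before anything is paused. The post-mortem helper prints only "Stopped" because --format={{.Host}} is a Go template over the status fields; other fields can be rendered the same way, e.g.:

	out/minikube-darwin-arm64 status -p embed-certs-185000 \
	  --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'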

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-233000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-233000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.818105167s)

-- stdout --
	* [default-k8s-diff-port-233000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-233000" primary control-plane node in "default-k8s-diff-port-233000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-233000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:54:19.630901   10698 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:54:19.631036   10698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:19.631039   10698 out.go:358] Setting ErrFile to fd 2...
	I0914 23:54:19.631042   10698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:19.631171   10698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:54:19.632221   10698 out.go:352] Setting JSON to false
	I0914 23:54:19.648549   10698 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6828,"bootTime":1726376431,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:54:19.648618   10698 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:54:19.652345   10698 out.go:177] * [default-k8s-diff-port-233000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:54:19.659389   10698 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:54:19.659466   10698 notify.go:220] Checking for updates...
	I0914 23:54:19.667366   10698 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:54:19.670409   10698 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:54:19.673357   10698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:54:19.676407   10698 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:54:19.679365   10698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:54:19.682543   10698 config.go:182] Loaded profile config "cert-expiration-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:54:19.682607   10698 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:54:19.682664   10698 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:54:19.686420   10698 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:54:19.693341   10698 start.go:297] selected driver: qemu2
	I0914 23:54:19.693347   10698 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:54:19.693355   10698 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:54:19.695657   10698 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:54:19.699370   10698 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:54:19.702482   10698 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:54:19.702509   10698 cni.go:84] Creating CNI manager for ""
	I0914 23:54:19.702538   10698 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:54:19.702545   10698 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:54:19.702577   10698 start.go:340] cluster config:
	{Name:default-k8s-diff-port-233000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:54:19.706268   10698 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:54:19.714356   10698 out.go:177] * Starting "default-k8s-diff-port-233000" primary control-plane node in "default-k8s-diff-port-233000" cluster
	I0914 23:54:19.718174   10698 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:54:19.718192   10698 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:54:19.718203   10698 cache.go:56] Caching tarball of preloaded images
	I0914 23:54:19.718271   10698 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:54:19.718278   10698 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:54:19.718343   10698 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/default-k8s-diff-port-233000/config.json ...
	I0914 23:54:19.718354   10698 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/default-k8s-diff-port-233000/config.json: {Name:mkab5a015d5e28095b322cf9d5acc02df5b595b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:54:19.718584   10698 start.go:360] acquireMachinesLock for default-k8s-diff-port-233000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:54:19.718622   10698 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "default-k8s-diff-port-233000"
	I0914 23:54:19.718635   10698 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:54:19.718681   10698 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:54:19.726378   10698 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:54:19.743939   10698 start.go:159] libmachine.API.Create for "default-k8s-diff-port-233000" (driver="qemu2")
	I0914 23:54:19.743973   10698 client.go:168] LocalClient.Create starting
	I0914 23:54:19.744042   10698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:54:19.744074   10698 main.go:141] libmachine: Decoding PEM data...
	I0914 23:54:19.744084   10698 main.go:141] libmachine: Parsing certificate...
	I0914 23:54:19.744120   10698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:54:19.744145   10698 main.go:141] libmachine: Decoding PEM data...
	I0914 23:54:19.744154   10698 main.go:141] libmachine: Parsing certificate...
	I0914 23:54:19.744525   10698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:54:19.907646   10698 main.go:141] libmachine: Creating SSH key...
	I0914 23:54:19.957624   10698 main.go:141] libmachine: Creating Disk image...
	I0914 23:54:19.957629   10698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:54:19.957867   10698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2
	I0914 23:54:19.966859   10698 main.go:141] libmachine: STDOUT: 
	I0914 23:54:19.966876   10698 main.go:141] libmachine: STDERR: 
	I0914 23:54:19.966928   10698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2 +20000M
	I0914 23:54:19.974812   10698 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:54:19.974830   10698 main.go:141] libmachine: STDERR: 
	I0914 23:54:19.974848   10698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2
	I0914 23:54:19.974854   10698 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:54:19.974867   10698 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:54:19.974895   10698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:79:c2:7f:b9:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2
	I0914 23:54:19.976501   10698 main.go:141] libmachine: STDOUT: 
	I0914 23:54:19.976515   10698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:54:19.976533   10698 client.go:171] duration metric: took 232.557375ms to LocalClient.Create
	I0914 23:54:21.978673   10698 start.go:128] duration metric: took 2.259998458s to createHost
	I0914 23:54:21.978728   10698 start.go:83] releasing machines lock for "default-k8s-diff-port-233000", held for 2.260122167s
	W0914 23:54:21.978782   10698 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:21.995937   10698 out.go:177] * Deleting "default-k8s-diff-port-233000" in qemu2 ...
	W0914 23:54:22.024596   10698 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:22.024622   10698 start.go:729] Will try again in 5 seconds ...
	I0914 23:54:27.026778   10698 start.go:360] acquireMachinesLock for default-k8s-diff-port-233000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:54:27.027300   10698 start.go:364] duration metric: took 403.5µs to acquireMachinesLock for "default-k8s-diff-port-233000"
	I0914 23:54:27.027425   10698 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:54:27.027704   10698 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:54:27.033522   10698 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:54:27.082317   10698 start.go:159] libmachine.API.Create for "default-k8s-diff-port-233000" (driver="qemu2")
	I0914 23:54:27.082367   10698 client.go:168] LocalClient.Create starting
	I0914 23:54:27.082481   10698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:54:27.082544   10698 main.go:141] libmachine: Decoding PEM data...
	I0914 23:54:27.082561   10698 main.go:141] libmachine: Parsing certificate...
	I0914 23:54:27.082636   10698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:54:27.082695   10698 main.go:141] libmachine: Decoding PEM data...
	I0914 23:54:27.082709   10698 main.go:141] libmachine: Parsing certificate...
	I0914 23:54:27.083311   10698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:54:27.269921   10698 main.go:141] libmachine: Creating SSH key...
	I0914 23:54:27.345120   10698 main.go:141] libmachine: Creating Disk image...
	I0914 23:54:27.345125   10698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:54:27.345317   10698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2
	I0914 23:54:27.354340   10698 main.go:141] libmachine: STDOUT: 
	I0914 23:54:27.354377   10698 main.go:141] libmachine: STDERR: 
	I0914 23:54:27.354441   10698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2 +20000M
	I0914 23:54:27.362317   10698 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:54:27.362332   10698 main.go:141] libmachine: STDERR: 
	I0914 23:54:27.362347   10698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2
	I0914 23:54:27.362352   10698 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:54:27.362360   10698 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:54:27.362399   10698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:bb:a9:4b:bc:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2
	I0914 23:54:27.364008   10698 main.go:141] libmachine: STDOUT: 
	I0914 23:54:27.364021   10698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:54:27.364034   10698 client.go:171] duration metric: took 281.663792ms to LocalClient.Create
	I0914 23:54:29.366182   10698 start.go:128] duration metric: took 2.338477166s to createHost
	I0914 23:54:29.366243   10698 start.go:83] releasing machines lock for "default-k8s-diff-port-233000", held for 2.338943208s
	W0914 23:54:29.366658   10698 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-233000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:29.387261   10698 out.go:201] 
	W0914 23:54:29.394445   10698 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:54:29.394479   10698 out.go:270] * 
	W0914 23:54:29.396819   10698 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:54:29.407495   10698 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-233000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000: exit status 7 (66.301958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)
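Unlike the embed-certs restart above, this is a fresh create: the ISO copy, SSH key, and qcow2 disk steps all succeed (both qemu-img convert and qemu-img resize return cleanly on each attempt), and the run only fails when socket_vmnet_client execs QEMU. The disk pipeline can be reproduced standalone to rule qemu-img out; the paths here are illustrative only:

	qemu-img create -f raw /tmp/disk.raw 1M
	qemu-img convert -f raw -O qcow2 /tmp/disk.raw /tmp/disk.qcow2
	qemu-img resize /tmp/disk.qcow2 +20000M
	qemu-img info /tmp/disk.qcow2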

TestStartStop/group/newest-cni/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.873732125s)

-- stdout --
	* [newest-cni-529000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-529000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:54:23.448098   10714 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:54:23.448228   10714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:23.448232   10714 out.go:358] Setting ErrFile to fd 2...
	I0914 23:54:23.448234   10714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:23.448357   10714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:54:23.449419   10714 out.go:352] Setting JSON to false
	I0914 23:54:23.465529   10714 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6832,"bootTime":1726376431,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:54:23.465611   10714 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:54:23.472689   10714 out.go:177] * [newest-cni-529000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:54:23.480599   10714 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:54:23.480616   10714 notify.go:220] Checking for updates...
	I0914 23:54:23.489464   10714 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:54:23.492515   10714 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:54:23.495482   10714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:54:23.498484   10714 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:54:23.501490   10714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:54:23.504759   10714 config.go:182] Loaded profile config "default-k8s-diff-port-233000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:54:23.504825   10714 config.go:182] Loaded profile config "multinode-053000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:54:23.504883   10714 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:54:23.508400   10714 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 23:54:23.516132   10714 start.go:297] selected driver: qemu2
	I0914 23:54:23.516140   10714 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:54:23.516148   10714 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:54:23.518682   10714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0914 23:54:23.518749   10714 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0914 23:54:23.526526   10714 out.go:177] * Automatically selected the socket_vmnet network
	I0914 23:54:23.529520   10714 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0914 23:54:23.529539   10714 cni.go:84] Creating CNI manager for ""
	I0914 23:54:23.529576   10714 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:54:23.529581   10714 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:54:23.529619   10714 start.go:340] cluster config:
	{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:54:23.533577   10714 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:54:23.542500   10714 out.go:177] * Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	I0914 23:54:23.546474   10714 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:54:23.546488   10714 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:54:23.546496   10714 cache.go:56] Caching tarball of preloaded images
	I0914 23:54:23.546561   10714 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:54:23.546573   10714 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:54:23.546636   10714 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/newest-cni-529000/config.json ...
	I0914 23:54:23.546650   10714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/newest-cni-529000/config.json: {Name:mk0f60a75b1b9b980516dd5a504882fd29ccbb54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:54:23.546882   10714 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:54:23.546917   10714 start.go:364] duration metric: took 29.125µs to acquireMachinesLock for "newest-cni-529000"
	I0914 23:54:23.546928   10714 start.go:93] Provisioning new machine with config: &{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:54:23.546973   10714 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:54:23.554452   10714 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:54:23.573470   10714 start.go:159] libmachine.API.Create for "newest-cni-529000" (driver="qemu2")
	I0914 23:54:23.573500   10714 client.go:168] LocalClient.Create starting
	I0914 23:54:23.573573   10714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:54:23.573607   10714 main.go:141] libmachine: Decoding PEM data...
	I0914 23:54:23.573621   10714 main.go:141] libmachine: Parsing certificate...
	I0914 23:54:23.573657   10714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:54:23.573682   10714 main.go:141] libmachine: Decoding PEM data...
	I0914 23:54:23.573687   10714 main.go:141] libmachine: Parsing certificate...
	I0914 23:54:23.574138   10714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:54:23.734266   10714 main.go:141] libmachine: Creating SSH key...
	I0914 23:54:23.891052   10714 main.go:141] libmachine: Creating Disk image...
	I0914 23:54:23.891059   10714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:54:23.891364   10714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2
	I0914 23:54:23.900735   10714 main.go:141] libmachine: STDOUT: 
	I0914 23:54:23.900752   10714 main.go:141] libmachine: STDERR: 
	I0914 23:54:23.900820   10714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2 +20000M
	I0914 23:54:23.908679   10714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:54:23.908693   10714 main.go:141] libmachine: STDERR: 
	I0914 23:54:23.908703   10714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2
	I0914 23:54:23.908708   10714 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:54:23.908733   10714 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:54:23.908773   10714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:86:31:fb:cf:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2
	I0914 23:54:23.910375   10714 main.go:141] libmachine: STDOUT: 
	I0914 23:54:23.910389   10714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:54:23.910419   10714 client.go:171] duration metric: took 336.916584ms to LocalClient.Create
	I0914 23:54:25.912591   10714 start.go:128] duration metric: took 2.365595875s to createHost
	I0914 23:54:25.912661   10714 start.go:83] releasing machines lock for "newest-cni-529000", held for 2.365762708s
	W0914 23:54:25.912709   10714 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:25.926905   10714 out.go:177] * Deleting "newest-cni-529000" in qemu2 ...
	W0914 23:54:25.959317   10714 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:25.959338   10714 start.go:729] Will try again in 5 seconds ...
	I0914 23:54:30.961536   10714 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:54:30.962003   10714 start.go:364] duration metric: took 361.5µs to acquireMachinesLock for "newest-cni-529000"
	I0914 23:54:30.962190   10714 start.go:93] Provisioning new machine with config: &{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 23:54:30.962502   10714 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 23:54:30.972297   10714 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 23:54:31.021126   10714 start.go:159] libmachine.API.Create for "newest-cni-529000" (driver="qemu2")
	I0914 23:54:31.021187   10714 client.go:168] LocalClient.Create starting
	I0914 23:54:31.021285   10714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/ca.pem
	I0914 23:54:31.021339   10714 main.go:141] libmachine: Decoding PEM data...
	I0914 23:54:31.021356   10714 main.go:141] libmachine: Parsing certificate...
	I0914 23:54:31.021422   10714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19644-6577/.minikube/certs/cert.pem
	I0914 23:54:31.021454   10714 main.go:141] libmachine: Decoding PEM data...
	I0914 23:54:31.021470   10714 main.go:141] libmachine: Parsing certificate...
	I0914 23:54:31.022013   10714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0914 23:54:31.192528   10714 main.go:141] libmachine: Creating SSH key...
	I0914 23:54:31.237769   10714 main.go:141] libmachine: Creating Disk image...
	I0914 23:54:31.237774   10714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 23:54:31.238013   10714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2.raw /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2
	I0914 23:54:31.247259   10714 main.go:141] libmachine: STDOUT: 
	I0914 23:54:31.247276   10714 main.go:141] libmachine: STDERR: 
	I0914 23:54:31.247327   10714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2 +20000M
	I0914 23:54:31.255245   10714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 23:54:31.255260   10714 main.go:141] libmachine: STDERR: 
	I0914 23:54:31.255271   10714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2
	I0914 23:54:31.255277   10714 main.go:141] libmachine: Starting QEMU VM...
	I0914 23:54:31.255288   10714 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:54:31.255325   10714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:51:2b:c3:b2:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2
	I0914 23:54:31.256950   10714 main.go:141] libmachine: STDOUT: 
	I0914 23:54:31.256965   10714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:54:31.256976   10714 client.go:171] duration metric: took 235.787333ms to LocalClient.Create
	I0914 23:54:33.257787   10714 start.go:128] duration metric: took 2.295284833s to createHost
	I0914 23:54:33.257813   10714 start.go:83] releasing machines lock for "newest-cni-529000", held for 2.295816667s
	W0914 23:54:33.257902   10714 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-529000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:33.269560   10714 out.go:201] 
	W0914 23:54:33.273544   10714 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:54:33.273552   10714 out.go:270] * 
	W0914 23:54:33.274290   10714 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:54:33.288534   10714 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (37.731541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.91s)
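
Every failure in this group reduces to the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched. A minimal sketch (hypothetical, not part of minikube or its test suite) that probes the same unix socket path shown in the logs can confirm whether the daemon is accepting connections:

// probe_socket_vmnet.go - a minimal sketch, assuming only the socket path
// that appears in the logs above; not part of the test suite.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// socket_vmnet_client fails with "Connection refused" when nothing is
	// listening on this socket; DialTimeout reproduces that check directly.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, the daemon is simply not running on the agent; with the Homebrew layout used here, something like `sudo brew services start socket_vmnet` would typically bring it back up.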

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-233000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-233000 create -f testdata/busybox.yaml: exit status 1 (29.585333ms)

** stderr ** 
	error: context "default-k8s-diff-port-233000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-233000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000: exit status 7 (29.558375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-233000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000: exit status 7 (29.236208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
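
This failure is secondary to the start failures above: because the "default-k8s-diff-port-233000" VM never started, minikube never wrote a context for the profile into the kubeconfig, so every `kubectl --context` invocation fails the same way. An illustrative sketch (the helper name and the use of os/exec are assumptions; only the context name comes from the logs) that performs the same check by hand:

// context_exists.go - illustrative sketch, not from this repo.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists shells out to kubectl and reports whether the named
// context is present in the active kubeconfig.
func contextExists(name string) (bool, error) {
	// `kubectl config get-contexts -o name` prints one context per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("default-k8s-diff-port-233000")
	fmt.Println(ok, err) // in this run: false <nil>
}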

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-233000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-233000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-233000 describe deploy/metrics-server -n kube-system: exit status 1 (26.757ms)

** stderr ** 
	error: context "default-k8s-diff-port-233000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-233000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000: exit status 7 (29.696375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-233000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-233000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.227042041s)

-- stdout --
	* [default-k8s-diff-port-233000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-233000" primary control-plane node in "default-k8s-diff-port-233000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-233000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-233000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:54:33.138537   10766 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:54:33.138704   10766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:33.138716   10766 out.go:358] Setting ErrFile to fd 2...
	I0914 23:54:33.138719   10766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:33.138860   10766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:54:33.139938   10766 out.go:352] Setting JSON to false
	I0914 23:54:33.156096   10766 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6842,"bootTime":1726376431,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:54:33.156171   10766 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:54:33.161478   10766 out.go:177] * [default-k8s-diff-port-233000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:54:33.169447   10766 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:54:33.169515   10766 notify.go:220] Checking for updates...
	I0914 23:54:33.177528   10766 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:54:33.180479   10766 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:54:33.183537   10766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:54:33.186622   10766 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:54:33.188059   10766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:54:33.191792   10766 config.go:182] Loaded profile config "default-k8s-diff-port-233000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:54:33.192059   10766 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:54:33.196508   10766 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:54:33.201533   10766 start.go:297] selected driver: qemu2
	I0914 23:54:33.201540   10766 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:54:33.201602   10766 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:54:33.203921   10766 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:54:33.203948   10766 cni.go:84] Creating CNI manager for ""
	I0914 23:54:33.203972   10766 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:54:33.204000   10766 start.go:340] cluster config:
	{Name:default-k8s-diff-port-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:54:33.207620   10766 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:54:33.214518   10766 out.go:177] * Starting "default-k8s-diff-port-233000" primary control-plane node in "default-k8s-diff-port-233000" cluster
	I0914 23:54:33.222656   10766 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:54:33.222672   10766 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:54:33.222692   10766 cache.go:56] Caching tarball of preloaded images
	I0914 23:54:33.222759   10766 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:54:33.222764   10766 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:54:33.222834   10766 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/default-k8s-diff-port-233000/config.json ...
	I0914 23:54:33.223341   10766 start.go:360] acquireMachinesLock for default-k8s-diff-port-233000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:54:33.257846   10766 start.go:364] duration metric: took 34.497459ms to acquireMachinesLock for "default-k8s-diff-port-233000"
	I0914 23:54:33.257861   10766 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:54:33.257868   10766 fix.go:54] fixHost starting: 
	I0914 23:54:33.258020   10766 fix.go:112] recreateIfNeeded on default-k8s-diff-port-233000: state=Stopped err=<nil>
	W0914 23:54:33.258032   10766 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:54:33.269551   10766 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-233000" ...
	I0914 23:54:33.273541   10766 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:54:33.273598   10766 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:bb:a9:4b:bc:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2
	I0914 23:54:33.276127   10766 main.go:141] libmachine: STDOUT: 
	I0914 23:54:33.276150   10766 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:54:33.276182   10766 fix.go:56] duration metric: took 18.3175ms for fixHost
	I0914 23:54:33.276189   10766 start.go:83] releasing machines lock for "default-k8s-diff-port-233000", held for 18.335083ms
	W0914 23:54:33.276195   10766 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:54:33.276245   10766 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:33.276251   10766 start.go:729] Will try again in 5 seconds ...
	I0914 23:54:38.278418   10766 start.go:360] acquireMachinesLock for default-k8s-diff-port-233000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:54:38.278849   10766 start.go:364] duration metric: took 323.166µs to acquireMachinesLock for "default-k8s-diff-port-233000"
	I0914 23:54:38.278987   10766 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:54:38.279007   10766 fix.go:54] fixHost starting: 
	I0914 23:54:38.279715   10766 fix.go:112] recreateIfNeeded on default-k8s-diff-port-233000: state=Stopped err=<nil>
	W0914 23:54:38.279740   10766 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:54:38.289336   10766 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-233000" ...
	I0914 23:54:38.292294   10766 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:54:38.292541   10766 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:bb:a9:4b:bc:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/default-k8s-diff-port-233000/disk.qcow2
	I0914 23:54:38.301505   10766 main.go:141] libmachine: STDOUT: 
	I0914 23:54:38.301568   10766 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:54:38.301647   10766 fix.go:56] duration metric: took 22.639666ms for fixHost
	I0914 23:54:38.301666   10766 start.go:83] releasing machines lock for "default-k8s-diff-port-233000", held for 22.795334ms
	W0914 23:54:38.301844   10766 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-233000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:38.309338   10766 out.go:201] 
	W0914 23:54:38.313280   10766 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:54:38.313304   10766 out.go:270] * 
	W0914 23:54:38.315746   10766 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:54:38.324319   10766 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-233000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000: exit status 7 (67.1835ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.30s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.183365875s)

-- stdout --
	* [newest-cni-529000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	* Restarting existing qemu2 VM for "newest-cni-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 23:54:35.607454   10793 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:54:35.607604   10793 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:35.607607   10793 out.go:358] Setting ErrFile to fd 2...
	I0914 23:54:35.607610   10793 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:35.607754   10793 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:54:35.608786   10793 out.go:352] Setting JSON to false
	I0914 23:54:35.624759   10793 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6844,"bootTime":1726376431,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:54:35.624833   10793 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:54:35.629684   10793 out.go:177] * [newest-cni-529000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:54:35.635615   10793 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:54:35.635688   10793 notify.go:220] Checking for updates...
	I0914 23:54:35.643633   10793 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:54:35.646681   10793 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:54:35.649681   10793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:54:35.652667   10793 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:54:35.655675   10793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:54:35.658976   10793 config.go:182] Loaded profile config "newest-cni-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:54:35.659246   10793 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:54:35.663701   10793 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:54:35.670663   10793 start.go:297] selected driver: qemu2
	I0914 23:54:35.670670   10793 start.go:901] validating driver "qemu2" against &{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:54:35.670742   10793 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:54:35.673039   10793 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0914 23:54:35.673061   10793 cni.go:84] Creating CNI manager for ""
	I0914 23:54:35.673086   10793 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:54:35.673107   10793 start.go:340] cluster config:
	{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:54:35.676725   10793 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:54:35.683629   10793 out.go:177] * Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	I0914 23:54:35.687700   10793 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:54:35.687720   10793 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:54:35.687734   10793 cache.go:56] Caching tarball of preloaded images
	I0914 23:54:35.687795   10793 preload.go:172] Found /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 23:54:35.687801   10793 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:54:35.687871   10793 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/newest-cni-529000/config.json ...
	I0914 23:54:35.688312   10793 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:54:35.688339   10793 start.go:364] duration metric: took 21.667µs to acquireMachinesLock for "newest-cni-529000"
	I0914 23:54:35.688347   10793 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:54:35.688353   10793 fix.go:54] fixHost starting: 
	I0914 23:54:35.688471   10793 fix.go:112] recreateIfNeeded on newest-cni-529000: state=Stopped err=<nil>
	W0914 23:54:35.688479   10793 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:54:35.692628   10793 out.go:177] * Restarting existing qemu2 VM for "newest-cni-529000" ...
	I0914 23:54:35.700466   10793 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:54:35.700501   10793 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:51:2b:c3:b2:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2
	I0914 23:54:35.702471   10793 main.go:141] libmachine: STDOUT: 
	I0914 23:54:35.702490   10793 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:54:35.702520   10793 fix.go:56] duration metric: took 14.16775ms for fixHost
	I0914 23:54:35.702524   10793 start.go:83] releasing machines lock for "newest-cni-529000", held for 14.18125ms
	W0914 23:54:35.702530   10793 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:54:35.702556   10793 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:35.702561   10793 start.go:729] Will try again in 5 seconds ...
	I0914 23:54:40.704751   10793 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk786b517beff5d4f4d36abeacef9431a20ff039 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:54:40.705372   10793 start.go:364] duration metric: took 520.333µs to acquireMachinesLock for "newest-cni-529000"
	I0914 23:54:40.705527   10793 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:54:40.705549   10793 fix.go:54] fixHost starting: 
	I0914 23:54:40.706313   10793 fix.go:112] recreateIfNeeded on newest-cni-529000: state=Stopped err=<nil>
	W0914 23:54:40.706339   10793 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 23:54:40.714753   10793 out.go:177] * Restarting existing qemu2 VM for "newest-cni-529000" ...
	I0914 23:54:40.717692   10793 qemu.go:418] Using hvf for hardware acceleration
	I0914 23:54:40.717938   10793 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:51:2b:c3:b2:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19644-6577/.minikube/machines/newest-cni-529000/disk.qcow2
	I0914 23:54:40.727693   10793 main.go:141] libmachine: STDOUT: 
	I0914 23:54:40.727744   10793 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 23:54:40.727813   10793 fix.go:56] duration metric: took 22.26775ms for fixHost
	I0914 23:54:40.727828   10793 start.go:83] releasing machines lock for "newest-cni-529000", held for 22.431875ms
	W0914 23:54:40.727983   10793 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 23:54:40.736774   10793 out.go:201] 
	W0914 23:54:40.739689   10793 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 23:54:40.739716   10793 out.go:270] * 
	W0914 23:54:40.742538   10793 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:54:40.749728   10793 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (68.589167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
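
Both SecondStart failures follow the retry shape visible in the stderr above: fixHost dies at the socket_vmnet_client exec, minikube waits five seconds, retries once, then exits with GUEST_PROVISION. A compact sketch of that flow (a hypothetical reconstruction for illustration, not minikube's actual code):

// retry_flow.go - hypothetical reconstruction of the start flow the logs
// show: one retry after a fixed 5-second delay, then a fatal error.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the socket_vmnet_client + qemu-system-aarch64
// exec that fails with "Connection refused" in the logs above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err = startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}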

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-233000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000: exit status 7 (32.763833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-233000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-233000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-233000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.228625ms)

** stderr ** 
	error: context "default-k8s-diff-port-233000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-233000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000: exit status 7 (29.607708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-233000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000: exit status 7 (29.526333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
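
Note: with the VM stopped, image list --format=json has nothing to report, so the go-cmp diff marks every expected v1.31.1 image as missing (all "-want", no "+got"). Against a running cluster the same expectation could be checked roughly as follows (a sketch; it assumes jq is available and that the JSON entries expose repoTags):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-233000 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort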

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-233000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-233000 --alsologtostderr -v=1: exit status 83 (42.189917ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-233000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-233000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:54:38.593485   10812 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:54:38.593634   10812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:38.593637   10812 out.go:358] Setting ErrFile to fd 2...
	I0914 23:54:38.593640   10812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:38.593765   10812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:54:38.594006   10812 out.go:352] Setting JSON to false
	I0914 23:54:38.594010   10812 mustload.go:65] Loading cluster: default-k8s-diff-port-233000
	I0914 23:54:38.594234   10812 config.go:182] Loaded profile config "default-k8s-diff-port-233000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:54:38.599089   10812 out.go:177] * The control-plane node default-k8s-diff-port-233000 host is not running: state=Stopped
	I0914 23:54:38.602953   10812 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-233000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-233000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000: exit status 7 (29.022666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-233000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000: exit status 7 (28.527708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
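
Note: pause never reaches the guest; per the stderr trace, mustload.go loads the profile, sees state=Stopped, and exits 83 with start advice instead of attempting the operation. The two exit codes in this block are easy to tell apart (sketch; <profile> is a placeholder):

	out/minikube-darwin-arm64 status -p <profile>; echo $?   # 7: status bitmask, host down
	out/minikube-darwin-arm64 pause -p <profile>; echo $?    # 83: refused, host not running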

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-529000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (30.756916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-529000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-529000 --alsologtostderr -v=1: exit status 83 (43.772125ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-529000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-529000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:54:40.936436   10836 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:54:40.936600   10836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:40.936603   10836 out.go:358] Setting ErrFile to fd 2...
	I0914 23:54:40.936606   10836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:54:40.936739   10836 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:54:40.936958   10836 out.go:352] Setting JSON to false
	I0914 23:54:40.936964   10836 mustload.go:65] Loading cluster: newest-cni-529000
	I0914 23:54:40.937188   10836 config.go:182] Loaded profile config "newest-cni-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:54:40.942050   10836 out.go:177] * The control-plane node newest-cni-529000 host is not running: state=Stopped
	I0914 23:54:40.946080   10836 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-529000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-529000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (30.501ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (31.40425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)
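
Note: every StartStop failure above shares one root cause: on this darwin/arm64 agent the qemu2 driver never brings a guest to Running, so SecondStart fails and the dependent checks (user app, addon, images, pause) each hit a Stopped host within milliseconds. A hedged reproduction outside the harness (profile name illustrative):

	out/minikube-darwin-arm64 start -p repro-qemu --driver=qemu2 --alsologtostderr
	out/minikube-darwin-arm64 status -p repro-qemu --format='{{.Host}}'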

                                                
                                    

Test pass (79/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 6.37
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.28
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 7.22
46 TestFunctional/serial/CopySyncFile 0.01
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.72
55 TestFunctional/serial/CacheCmd/cache/add_local 1.19
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.24
71 TestFunctional/parallel/DryRun 0.23
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 0.21
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
107 TestFunctional/parallel/ProfileCmd/profile_list 0.08
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
112 TestFunctional/parallel/Version/short 0.04
119 TestFunctional/parallel/ImageCommands/Setup 1.85
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_echo-server_images 0.08
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.34
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.2
193 TestMainNoArgs 0.03
238 TestStoppedBinaryUpgrade/Setup 0.96
240 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
258 TestNoKubernetes/serial/ProfileList 0.1
259 TestNoKubernetes/serial/Stop 3.19
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.06
275 TestStartStop/group/old-k8s-version/serial/Stop 3.29
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
286 TestStartStop/group/no-preload/serial/Stop 3.5
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
297 TestStartStop/group/embed-certs/serial/Stop 3.89
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.29
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
313 TestStartStop/group/newest-cni/serial/DeployApp 0
314 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
315 TestStartStop/group/newest-cni/serial/Stop 2.07
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-312000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-312000: exit status 85 (98.8215ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT |          |
	|         | -p download-only-312000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 23:28:44
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 23:28:44.902977    7095 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:28:44.903125    7095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:28:44.903129    7095 out.go:358] Setting ErrFile to fd 2...
	I0914 23:28:44.903132    7095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:28:44.903255    7095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	W0914 23:28:44.903352    7095 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19644-6577/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19644-6577/.minikube/config/config.json: no such file or directory
	I0914 23:28:44.904731    7095 out.go:352] Setting JSON to true
	I0914 23:28:44.922895    7095 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5293,"bootTime":1726376431,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:28:44.922970    7095 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:28:44.927812    7095 out.go:97] [download-only-312000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:28:44.927953    7095 notify.go:220] Checking for updates...
	W0914 23:28:44.928083    7095 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 23:28:44.931984    7095 out.go:169] MINIKUBE_LOCATION=19644
	I0914 23:28:44.935806    7095 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:28:44.940310    7095 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:28:44.944079    7095 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:28:44.948826    7095 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	W0914 23:28:44.956500    7095 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 23:28:44.956686    7095 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:28:44.960374    7095 out.go:97] Using the qemu2 driver based on user configuration
	I0914 23:28:44.960392    7095 start.go:297] selected driver: qemu2
	I0914 23:28:44.960406    7095 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:28:44.960470    7095 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:28:44.964217    7095 out.go:169] Automatically selected the socket_vmnet network
	I0914 23:28:44.970773    7095 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0914 23:28:44.970882    7095 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 23:28:44.970927    7095 cni.go:84] Creating CNI manager for ""
	I0914 23:28:44.970958    7095 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 23:28:44.971006    7095 start.go:340] cluster config:
	{Name:download-only-312000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:28:44.975147    7095 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:28:44.977779    7095 out.go:97] Downloading VM boot image ...
	I0914 23:28:44.977795    7095 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso
	I0914 23:28:52.110719    7095 out.go:97] Starting "download-only-312000" primary control-plane node in "download-only-312000" cluster
	I0914 23:28:52.110744    7095 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 23:28:52.167091    7095 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 23:28:52.167114    7095 cache.go:56] Caching tarball of preloaded images
	I0914 23:28:52.168299    7095 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 23:28:52.173258    7095 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0914 23:28:52.173264    7095 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 23:28:52.260996    7095 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 23:28:57.353129    7095 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 23:28:57.353295    7095 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 23:28:58.048920    7095 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0914 23:28:58.049146    7095 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/download-only-312000/config.json ...
	I0914 23:28:58.049163    7095 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/download-only-312000/config.json: {Name:mk3ecd4c85776eff039951c78276834f03d90b00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:28:58.050275    7095 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 23:28:58.050667    7095 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0914 23:28:58.696395    7095 out.go:193] 
	W0914 23:28:58.702304    7095 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19644-6577/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107125780 0x107125780 0x107125780 0x107125780 0x107125780 0x107125780 0x107125780] Decompressors:map[bz2:0x1400000fee0 gz:0x1400000fee8 tar:0x1400000fe60 tar.bz2:0x1400000fe70 tar.gz:0x1400000feb0 tar.xz:0x1400000fec0 tar.zst:0x1400000fed0 tbz2:0x1400000fe70 tgz:0x1400000feb0 txz:0x1400000fec0 tzst:0x1400000fed0 xz:0x1400000ff00 zip:0x1400000ff10 zst:0x1400000ff08] Getters:map[file:0x140018045b0 http:0x140008a4280 https:0x140008a42d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0914 23:28:58.702329    7095 out_reason.go:110] 
	W0914 23:28:58.711164    7095 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:28:58.715204    7095 out.go:193] 
	
	
	* The control-plane node download-only-312000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-312000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
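
Note: the noteworthy entry in this otherwise passing test is the W-level "Failed to cache kubectl": dl.k8s.io answers 404 for the v1.20.0 darwin/arm64 kubectl checksum, presumably because upstream never published darwin/arm64 binaries for a release that old, while the v1.31.1 download later in this report succeeds. The URLs from the log can be probed directly (sketch):

	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # 404 expected
	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256   # 200 expected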

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-312000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (6.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-074000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-074000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (6.371165375s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.37s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-074000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-074000: exit status 85 (78.499583ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT |                     |
	|         | -p download-only-312000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT | 14 Sep 24 23:28 PDT |
	| delete  | -p download-only-312000        | download-only-312000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT | 14 Sep 24 23:28 PDT |
	| start   | -o=json --download-only        | download-only-074000 | jenkins | v1.34.0 | 14 Sep 24 23:28 PDT |                     |
	|         | -p download-only-074000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 23:28:59
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 23:28:59.139360    7120 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:28:59.139492    7120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:28:59.139495    7120 out.go:358] Setting ErrFile to fd 2...
	I0914 23:28:59.139497    7120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:28:59.139630    7120 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:28:59.140700    7120 out.go:352] Setting JSON to true
	I0914 23:28:59.156694    7120 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5308,"bootTime":1726376431,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:28:59.156804    7120 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:28:59.161257    7120 out.go:97] [download-only-074000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:28:59.161328    7120 notify.go:220] Checking for updates...
	I0914 23:28:59.165241    7120 out.go:169] MINIKUBE_LOCATION=19644
	I0914 23:28:59.168260    7120 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:28:59.172280    7120 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:28:59.175291    7120 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:28:59.178151    7120 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	W0914 23:28:59.184251    7120 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 23:28:59.184432    7120 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:28:59.185793    7120 out.go:97] Using the qemu2 driver based on user configuration
	I0914 23:28:59.185801    7120 start.go:297] selected driver: qemu2
	I0914 23:28:59.185804    7120 start.go:901] validating driver "qemu2" against <nil>
	I0914 23:28:59.185846    7120 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 23:28:59.189246    7120 out.go:169] Automatically selected the socket_vmnet network
	I0914 23:28:59.194433    7120 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0914 23:28:59.194524    7120 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 23:28:59.194542    7120 cni.go:84] Creating CNI manager for ""
	I0914 23:28:59.194574    7120 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 23:28:59.194580    7120 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 23:28:59.194621    7120 start.go:340] cluster config:
	{Name:download-only-074000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-074000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:28:59.198130    7120 iso.go:125] acquiring lock: {Name:mkb6de57449004788d9d971827727b5763f0636f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:28:59.201288    7120 out.go:97] Starting "download-only-074000" primary control-plane node in "download-only-074000" cluster
	I0914 23:28:59.201297    7120 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:28:59.255013    7120 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:28:59.255040    7120 cache.go:56] Caching tarball of preloaded images
	I0914 23:28:59.255877    7120 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:28:59.260224    7120 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0914 23:28:59.260232    7120 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0914 23:28:59.334178    7120 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 23:29:03.391924    7120 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0914 23:29:03.392091    7120 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0914 23:29:03.913349    7120 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 23:29:03.913542    7120 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/download-only-074000/config.json ...
	I0914 23:29:03.913560    7120 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19644-6577/.minikube/profiles/download-only-074000/config.json: {Name:mk9decf0e110a98eeb24b9e1380fc6645beeb0c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:29:03.914237    7120 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 23:29:03.914376    7120 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19644-6577/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-074000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-074000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-074000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-013000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-013000: exit status 85 (58.769834ms)

                                                
                                                
-- stdout --
	* Profile "addons-013000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-013000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
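
Note: exit status 85 here pairs with the "Profile ... not found" advice, i.e. the profile itself is missing; contrast with exit status 83 earlier in the report, where the profile exists but its host is stopped. Sketch (the profile name is deliberately nonexistent):

	out/minikube-darwin-arm64 addons enable dashboard -p no-such-profile; echo $?   # 85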

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-013000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-013000: exit status 85 (56.015583ms)

                                                
                                                
-- stdout --
	* Profile "addons-013000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-013000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (10.28s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.28s)

                                                
                                    
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
TestErrorSpam/status (0.09s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 status: exit status 7 (31.228458ms)

                                                
                                                
-- stdout --
	nospam-751000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 status: exit status 7 (30.842791ms)

                                                
                                                
-- stdout --
	nospam-751000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 status: exit status 7 (30.38075ms)

                                                
                                                
-- stdout --
	nospam-751000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

                                                
                                    
TestErrorSpam/pause (0.12s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 pause: exit status 83 (37.6415ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-751000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-751000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 pause: exit status 83 (41.00725ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-751000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-751000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 pause: exit status 83 (40.788875ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-751000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-751000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

                                                
                                    
TestErrorSpam/unpause (0.12s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 unpause: exit status 83 (39.851833ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-751000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-751000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 unpause: exit status 83 (40.86475ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-751000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-751000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 unpause: exit status 83 (38.805334ms)
-- stdout --
	* The control-plane node nospam-751000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-751000"
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 stop: (2.035644708s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 stop: (3.340177834s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-751000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-751000 stop: (1.837196417s)
--- PASS: TestErrorSpam/stop (7.22s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19644-6577/.minikube/files/etc/test/nested/copy/7093/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-893000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2497962861/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 cache add minikube-local-cache-test:functional-893000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 cache delete minikube-local-cache-test:functional-893000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-893000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 config get cpus: exit status 14 (29.757917ms)
** stderr **
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 config get cpus: exit status 14 (36.250125ms)
** stderr **
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-893000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-893000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.435792ms)
-- stdout --
	* [functional-893000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
-- /stdout --
** stderr **
	I0914 23:30:39.731986    7850 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:30:39.732121    7850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:30:39.732125    7850 out.go:358] Setting ErrFile to fd 2...
	I0914 23:30:39.732127    7850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:30:39.732253    7850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:30:39.733208    7850 out.go:352] Setting JSON to false
	I0914 23:30:39.749391    7850 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5408,"bootTime":1726376431,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:30:39.749453    7850 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:30:39.752926    7850 out.go:177] * [functional-893000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 23:30:39.759948    7850 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:30:39.760000    7850 notify.go:220] Checking for updates...
	I0914 23:30:39.766886    7850 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:30:39.769951    7850 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:30:39.772931    7850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:30:39.775964    7850 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:30:39.778931    7850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:30:39.782027    7850 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:30:39.782296    7850 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:30:39.785896    7850 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 23:30:39.791866    7850 start.go:297] selected driver: qemu2
	I0914 23:30:39.791873    7850 start.go:901] validating driver "qemu2" against &{Name:functional-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:30:39.791936    7850 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:30:39.798866    7850 out.go:201] 
	W0914 23:30:39.802952    7850 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 23:30:39.806877    7850 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-893000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-893000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-893000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.5975ms)
-- stdout --
	* [functional-893000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
-- /stdout --
** stderr **
	I0914 23:30:39.615761    7846 out.go:345] Setting OutFile to fd 1 ...
	I0914 23:30:39.615864    7846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:30:39.615868    7846 out.go:358] Setting ErrFile to fd 2...
	I0914 23:30:39.615870    7846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 23:30:39.615990    7846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19644-6577/.minikube/bin
	I0914 23:30:39.617331    7846 out.go:352] Setting JSON to false
	I0914 23:30:39.634047    7846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5408,"bootTime":1726376431,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0914 23:30:39.634127    7846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 23:30:39.639042    7846 out.go:177] * [functional-893000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0914 23:30:39.646899    7846 out.go:177]   - MINIKUBE_LOCATION=19644
	I0914 23:30:39.646972    7846 notify.go:220] Checking for updates...
	I0914 23:30:39.652937    7846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	I0914 23:30:39.655949    7846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 23:30:39.658916    7846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:30:39.661805    7846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	I0914 23:30:39.664892    7846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:30:39.668200    7846 config.go:182] Loaded profile config "functional-893000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 23:30:39.668471    7846 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 23:30:39.671850    7846 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0914 23:30:39.678896    7846 start.go:297] selected driver: qemu2
	I0914 23:30:39.678903    7846 start.go:901] validating driver "qemu2" against &{Name:functional-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 23:30:39.678953    7846 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:30:39.684884    7846 out.go:201] 
	W0914 23:30:39.688917    7846 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 23:30:39.692897    7846 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-893000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "44.735666ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "32.662292ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "44.974417ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.702625ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.818374666s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-893000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image rm kicbase/echo-server:functional-893000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-893000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 image save --daemon kicbase/echo-server:functional-893000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-893000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.01267075s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-893000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-893000
--- PASS: TestFunctional/delete_echo-server_images (0.08s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-893000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-893000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-733000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-733000 --output=json --user=testUser: (3.338755083s)
--- PASS: TestJSONOutput/stop/Command (3.34s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-847000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-847000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.868917ms)
-- stdout --
	{"specversion":"1.0","id":"58a65be4-92b3-4a27-96e8-a1f88ec2fa6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-847000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"361c65c7-326d-4add-b6a6-7bf9135f7550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"868d2db2-9b66-43d0-a468-8c1450cdd16a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig"}}
	{"specversion":"1.0","id":"13ed829b-4fe4-4105-9313-a724fa9b2d04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"766f0f47-f6a8-439c-ac4b-cdb1de6488fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a59ccb04-fff0-4b58-a0b3-099bcf22d960","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube"}}
	{"specversion":"1.0","id":"055d92a5-3016-42e6-a049-87ec8cceba5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"07bd1716-acdc-4e6f-8165-ca5f0d39ba37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-847000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-847000
--- PASS: TestErrorJSONOutput (0.20s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-438000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-019000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-019000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (101.764792ms)
-- stdout --
	* [NoKubernetes-019000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19644-6577/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19644-6577/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr **
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:

	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-019000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-019000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.893791ms)
-- stdout --
	* The control-plane node NoKubernetes-019000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-019000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.10s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-019000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-019000: (3.1862565s)
--- PASS: TestNoKubernetes/serial/Stop (3.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-019000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-019000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (59.15375ms)
-- stdout --
	* The control-plane node NoKubernetes-019000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-019000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-003000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-003000 --alsologtostderr -v=3: (3.2921105s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (60.82925ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-003000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-835000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-835000 --alsologtostderr -v=3: (3.49592775s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.50s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (58.57175ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-835000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-185000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-185000 --alsologtostderr -v=3: (3.891375667s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.89s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (61.499791ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-185000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-233000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-233000 --alsologtostderr -v=3: (3.288412417s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-233000 -n default-k8s-diff-port-233000: exit status 7 (60.605292ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-233000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-529000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-529000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-529000 --alsologtostderr -v=3: (2.070675833s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (55.100041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-529000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.21s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-893000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1260356647/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726381803351608000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1260356647/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726381803351608000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1260356647/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726381803351608000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1260356647/001/test-1726381803351608000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (56.217958ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.205458ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.798917ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.518ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.217459ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.467583ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.488042ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.891792ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "sudo umount -f /mount-9p": exit status 83 (45.338875ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-893000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-893000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1260356647/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.21s)

TestFunctional/parallel/MountCmd/specific-port (10.72s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-893000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2475946924/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (65.960709ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.209208ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.339208ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.712875ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.652083ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.615209ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.931875ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "sudo umount -f /mount-9p": exit status 83 (44.836417ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-893000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-893000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2475946924/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.72s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.14s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-893000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup144056901/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-893000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup144056901/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-893000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup144056901/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1: exit status 83 (81.812333ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1: exit status 83 (85.515958ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1: exit status 83 (87.138167ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1: exit status 83 (83.40125ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1: exit status 83 (85.951125ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1: exit status 83 (89.52725ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-893000 ssh "findmnt -T" /mount1: exit status 83 (83.153667ms)

-- stdout --
	* The control-plane node functional-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-893000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-893000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup144056901/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-893000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup144056901/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-893000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup144056901/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.14s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.35s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-262000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-262000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-262000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-262000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-262000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-262000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-262000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-262000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-262000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-262000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-262000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: /etc/hosts:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: /etc/resolv.conf:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-262000

>>> host: crictl pods:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: crictl containers:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> k8s: describe netcat deployment:
error: context "cilium-262000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-262000" does not exist

>>> k8s: netcat logs:
error: context "cilium-262000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-262000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-262000" does not exist

>>> k8s: coredns logs:
error: context "cilium-262000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-262000" does not exist

>>> k8s: api server logs:
error: context "cilium-262000" does not exist

>>> host: /etc/cni:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: ip a s:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: ip r s:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: iptables-save:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: iptables table nat:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-262000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-262000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-262000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-262000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-262000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-262000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-262000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-262000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-262000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-262000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-262000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: kubelet daemon config:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> k8s: kubelet logs:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-262000

>>> host: docker daemon status:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: docker daemon config:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: docker system info:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: cri-docker daemon status:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: cri-docker daemon config:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: cri-dockerd version:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: containerd daemon status:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: containerd daemon config:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: containerd config dump:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: crio daemon status:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: crio daemon config:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: /etc/crio:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

>>> host: crio config:
* Profile "cilium-262000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262000"

----------------------- debugLogs end: cilium-262000 [took: 2.2412295s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-262000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-262000
--- SKIP: TestNetworkPlugins/group/cilium (2.35s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-430000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-430000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)